Recognizing from history where this all might lead, the prospect of any serious economic downturn being met with a widespread push for mass automation, paired with a regime overwhelmingly friendly to the tech and business class while executing a campaign of oppression and prosecution against precarious manual and skilled laborers, should make us all sit up and pay attention.
Doomsider@lemmy.world
on 05 Aug 03:01
nextcollapse
Ooowee, they are setting up the US for a major bust, aren’t they? I guess all the wealthy people will just have to buy up everything when it becomes dirt cheap. Sucks to have to own everything, I guess.
brucethemoose@lemmy.world
on 05 Aug 03:16
nextcollapse
Open models are going to kick the stool out. Hopefully.
GLM 4.5 is already #2 on LM Arena, above Grok and ChatGPT, and runnable on homelab rigs, yet it has just 32B active parameters (which is mad). Extrapolate that a bit, and it’s just a race to the zero-cost bottom. None of this is sustainable.
WorldsDumbestMan@lemmy.today
on 05 Aug 13:52
collapse
Qwen3 8B, sorry, idiot spelling. I use it to talk about problems when I have no internet or have maxed out on Claude. I can rarely trust it with anything reasoning-related; it’s faster and easier to do most things myself.
brucethemoose@lemmy.world
on 05 Aug 13:58
collapse
Yeah, 7B models are just not quite there.
There are tons of places to get free access to bigger models. I’d suggest Jamba, Kimi, Deepseek Chat, and Google AI Studio, and the new GLM chat app: chat.z.ai
And depending on your hardware, you can probably run better MoEs at the speed of 8Bs. Qwen3 30B is so much smarter it’s not even funny, and faster on CPU.
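The “faster on CPU” point comes down to a back-of-envelope: decoding on local hardware is roughly memory-bandwidth-bound, so what paces an MoE is the active parameters streamed per token, not its total size. A rough sketch (the bandwidth and quantization figures below are illustrative assumptions, not measurements):

```python
def est_tokens_per_sec(active_params_b: float, bytes_per_param: float, mem_bw_gb_s: float) -> float:
    """Crude ceiling: each decoded token must stream the active weights from memory."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return mem_bw_gb_s * 1e9 / bytes_per_token

# Assumed: dual-channel DDR5 at ~100 GB/s, ~4-bit quantization (0.5 bytes/param).
print(est_tokens_per_sec(8, 0.5, 100))  # dense 8B: 25.0 tok/s ceiling
print(est_tokens_per_sec(3, 0.5, 100))  # 30B MoE with ~3B active: ~66.7 tok/s ceiling
```

Which is why a 30B-A3B MoE can decode faster on the same CPU than a dense 8B while being much smarter.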
brucethemoose@lemmy.world
on 05 Aug 11:56
collapse
It’s going to be slow as molasses on ollama. It needs a better runtime, and GLM 4.5 probably isn’t supported at this moment anyway.
brucethemoose@lemmy.world
on 05 Aug 11:55
collapse
The full GLM? Basically a 3090 or 4090 and a budget EPYC CPU. Or maybe 2 GPUs on a threadripper system.
GLM Air? Now this would work on a 16GB+ VRAM desktop, just slap in 96GB+ (maybe 64GB?) of fast RAM. Or the recent Framework desktop, or any mini PC/laptop with the 128GB Ryzen 395 config, or a 128GB+ Mac.
You’d download the weights, quantize yourself if needed, and run them in ik_llama.cpp (which should get support imminently).
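For ballparking what fits where: quantized weight size is just parameter count times bits per weight. A quick sketch (the ~355B total for full GLM 4.5 and ~106B for GLM Air are my assumptions from the published configs; KV cache and runtime overhead come on top):

```python
def weight_gb(total_params_b: float, bits_per_weight: float) -> float:
    """Approximate size of quantized weights in GB (ignores cache/overhead)."""
    return total_params_b * bits_per_weight / 8

# Assumed totals: full GLM 4.5 ~355B params, GLM Air ~106B params, ~4-bit quants.
print(weight_gb(355, 4))  # 177.5 GB: hence the EPYC-class RAM plus a GPU for offload
print(weight_gb(106, 4))  # 53.0 GB: hence 64-96GB of fast RAM plus 16GB VRAM
```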
As someone who works on integrating AI: it’s failing badly.
At best, it’s good for transcription, at least until it hallucinates and adds things to your medical record that don’t exist. Which it does. And when providers don’t check for errors (which few do regularly), congrats: you now have a medical record of whatever it hallucinated today.
And they are no better than answering machines for customer service. Sure, they can answer basic questions, but so can the automated phone systems.
They can’t consistently do anything more complex without making errors, and most people are frankly too dumb or lazy to properly verify outputs. And that’s why this bubble is so huge.
and most people are frankly too dumb or lazy to properly verify outputs.
This is my main argument. I need to check the output for correctness anyway. Might as well do the work myself in the first place.
RagingRobot@lemmy.world
on 05 Aug 09:39
nextcollapse
People are happy to accept the wrong answer without even checking lol
mrvictory1@lemmy.world
on 05 Aug 19:54
nextcollapse
This is exactly why I love DuckDuckGo’s AI results built into search. It appears when it is relevant (and yes, you can nuke it from orbit so it never ever appears), and it always gives citations (two websites) so I can go check whether it is right. Sometimes it works wonders when regular search results are not relevant. Sometimes it fails hard. I can distinguish one from the other because I can always check the sources.
GhostTheToast@lemmy.world
on 06 Aug 03:09
collapse
Honestly I mostly use it as a jumping off point for my code or to help me sound more coherent when writing emails.
The fact that coding is a big corner of the use cases means that the tech sector is essentially high on their own supply.
Summarizing and aggregating data alone isn’t a substitute for the smoke and mirrors of confidence that is a consulting firm. It just lets the ones that can lean on branding charge more hours for the same output, and adds “integrating AI” as another bucket of vomit to fling.
interdimensionalmeme@lemmy.ml
on 05 Aug 10:08
nextcollapse
insurance companies, oh no, insurance companies !!! AArrrggghhh !!!
OctopusNemeses@lemmy.world
on 05 Aug 10:21
nextcollapse
I tried having it identify an unknown integrated circuit. It hallucinated a chip. It kept giving me non-existent datasheets and 404 links to digikey/mouser/etc.
vacuumflower@lemmy.sdf.org
on 05 Aug 10:37
nextcollapse
Well, from this description it’s still usable for things too complex to just do Monte Carlo on, but where verification of results is possible. May even be efficient. But that seems narrow.
BTW, this even enables ethical automated combat drones. I know that one word there seems out of place, but if we have an “AI” for target/trajectory/action suggestion, but something more complex/expensive for verification, ultimately with a human in charge, then it’s possible to both increase the efficiency of combat machines and not increase the chances of civilian casualties and friendly fire (when somebody is at least trying not to have those).
it’s possible to both increase efficiency of combat machines and not increase the chances of civilian casualties and friendly fire (when somebody is at least trying to not have those).
But how does this kind of work help next quarter’s profits?
vacuumflower@lemmy.sdf.org
on 06 Aug 07:13
collapse
If each unplanned death that wasn’t the result of an operator’s mistake led to confiscation of one month’s profit (not margin), then I’d think it would help very much.
As someone who is actually an AI tool developer (I just use existing models) - it’s absolutely NOT failing.
Lemmy is ironically incredibly tech illiterate.
It can be working and good and still be a bubble - you know that right? A lot of AI is overvalued but to say it’s “failing badly” is absurd and really helps absolutely no one.
According to whom? No one’s running their instance here. I’m a software dev with over 20 years of FOSS experience, and IMO Lemmy’s user base is a somewhat illiterate bunch of contrarians when it comes to popular tech discussions.
We’re clearly not going to agree here without objective data, so unless you’re willing to provide that, have a good day. Bye.
rhombus@sh.itjust.works
on 05 Aug 14:16
nextcollapse
And they are no better than answering machines for customer service. Sure, they can answer basic questions, but so can the automated phone systems.
This is what drives me nuts the most about it. We had so many incredibly efficient, purpose-built tools using the same technologies (machine learning and neural networks), and we threw them away in favor of wildly inefficient, general-purpose LLMs that can’t do a single thing right. All because marketing hype convinced billionaires they won’t need to pay people anymore.
If you want to define “failing” as unable to do everything correctly, then sure, I’d concur.
However, if you want to define “failing” as replacing people in their jobs, I’d disagree. It’s doing that, even though it’s not meeting the criteria to pass the first test.
belit_deg@lemmy.world
on 05 Aug 04:04
nextcollapse
If I were China, I would be thrilled to hear that the West is building data centres for LLMs, sucking power from the grid, and spending all its attention and money on AI rather than building better universities and industry. Just sit back and enjoy while I get ahead in those areas.
They’ve been ahead for the past 2 decades. Government is robbing us blind because it only serves multinational corporations or foreign governments. It does not serve the people.
vacuumflower@lemmy.sdf.org
on 05 Aug 10:48
collapse
They have a demographic pit in front of them which they themselves created with the one-child policy.
Also, the CCP doesn’t exactly serve the people either. It’s a hierarchy of (possibly benevolent) bureaucrats.
I never said they were ahead on social issues. They aren’t and have never been.
Their infrastructure shits on ours. Hell, look at their healthcare system.
TheGrandNagus@lemmy.world
on 07 Aug 06:44
collapse
The one child policy and the nightmare that will cause is not just a social policy.
And yes, China’s infrastructure is very very impressive, however it’s also true that when everything has been built in the past 30 years, it’s inevitably going to be a lot more efficient and modern than a country that has a lot of legacy baggage. A prime example of that is probably the UK, who are still trying to keep Victorian-era rail infrastructure working. Tearing out old stuff and replacing it is time consuming, complex, and expensive.
The question is when, not if. But it’s an expensive question to guess the “when” wrong. I believe the famous idiom is: the market can stay irrational longer than you can stay solvent.
There’s also the tech bubble now, and the pyramid scheme of the US housing sector will cause more financial issues as well, and so will the whole credit card system.
Willing to take a real-money bet that the bubble is not going to pop, despite Lemmy’s obsession here. The value is absolutely inflated, but it’s definitely real value, and LLMs are not going to disappear unless we create a better AI technology.
In general we’re way past the point of tech bubbles popping. Software markets move incredibly fast and are incredibly resilient to this. There literally hasn’t been a software bubble popping since the dotcom boom. Prove me wrong.
Even if you see problems with LLMs and AI in general, this hopeful doomerism is really not helping anyone. Now instead of spending effort on improving things, people become these angry, passive, delusional accelerationists without any self-awareness.
There’s an argument that this is just the targeted-ads bubble that keeps inflating using different technologies. That’s where the money is coming from. It’s a game of smoke and mirrors, but this time it seems like they are betting big on a single technology for a longer time, which is different from what we have seen in the past 10 years.
EncryptKeeper@lemmy.world
on 05 Aug 12:57
nextcollapse
I mean, we haven’t figured out how to make AI profitable yet, and though it’s a cool technology with real-world use cases, nobody has proven yet that the juice is worth the squeeze. There’s an unimaginable amount of money tied up in a technology on the hope that one day they find a way to make it profitable, and though AI as a technology “improves”, its progress toward providing more value than it costs to run is not keeping pace.
If I roleplayed as somebody who desperately wanted AI to succeed, my first question would be “What is the plan to have AI make money?” And so far nobody, not even the technology’s biggest sycophants, has an answer.
bridgeenjoyer@sh.itjust.works
on 05 Aug 13:58
nextcollapse
The profit of AI lies in this: mass surveillance and ads. That’s it.
EncryptKeeper@lemmy.world
on 05 Aug 16:03
collapse
The revenue of AI lies in mass surveillance and ads. But even going full dystopia, that has not been enough to make AI companies profitable.
bridgeenjoyer@sh.itjust.works
on 05 Aug 19:37
collapse
Millennials are killing the mass surveillance and advertising industry!
AI is absolutely profitable, just not for everybody.
EncryptKeeper@lemmy.world
on 05 Aug 15:56
nextcollapse
AI as a technology is so far not profitable for anybody. The hardware AI runs on is profitable, as might be some startups that are heavily leveraging AI, but actually operating AI is so far not profitable. And because increasingly smaller improvements in AI use exponentially more power, no path visible to any of us today suggests anyone has found a route to profitability. Aside from some kind of miracle out of left field that no one today has even conceived, the long-term outlook isn’t great.
If AI as a technology busts, so does the insane profits behind the hardware it runs on. And without that left field technological breakthrough, the only option to pursue to avoid AI going completely bust is to raise prices astronomically, which would bust any companies currently dependent on all the AI currently being provided to them for basically next to nothing.
The entire industry is operating at a loss, but is being propped up by the currently abstract idea that AI will some day make money. This isn’t the “AI Hater” viewpoint, it’s just the spot AI is currently in. If you think AI is here to stay, you’re placing a bet on a promise that nobody as of today can actually make.
EncryptKeeper@lemmy.world
on 05 Aug 17:46
collapse
Delusion? Ok let’s get it straight from the horse’s mouth then. I’ve asked ChatGPT if OpenAI is profitable, and to explain its financial outlook. What you see below, emphasis and emojis, are generated by ChatGPT:
—ChatGPT—
OpenAI is not currently profitable. Despite its rapid growth, the company continues to operate at a substantial loss.
📊 Financial Snapshot
Annual recurring revenue (ARR) was reported at approximately $12 billion as of July 2025, implying around $1 billion per month in revenue.
Projected total revenue for 2025 is $12.7 billion, up from roughly $3.7 billion in 2024.
However, OpenAI’s cash burn has increased, with projected operational losses of around $8 billion in 2025 alone.
—end ChatGPT—
The most favorable projections are that OpenAI will not be cash positive (that means making a single dollar in profit) until it reaches $129 billion in revenue. That means OpenAI has to make more than 10X its annual revenue to finally be profitable. And its current strategy to make more money is to expand its infrastructure to take on more customers and run more powerful systems.
The problem is, the models require substantially more power to make moderate gains in accuracy and capability. And every new AI datacenter means more land cost, engineers, water, and electricity. Compounding the issue is that the more electricity they use, the more it costs. NJ has paved the way for a number of huge new AI datacenters in the past few years, and the cost of electricity in the state has skyrocketed. People have seen their monthly electric bills rise by 50-150% in the last couple of months alone. That’s forcing people out of their homes, and it also eats substantially into revenue growth for the data centers.
It’s quite literally a race for AI companies to reach profitability before hitting the natural limits of the resources they require to expand. And I haven’t heard a peep about how they expect to do so.
You use one company that is spearheading the entire industry as your example that no AI company is profitable. Either you are arguing in extremely bad faith or you’re incredibly stupid, I’m sorry.
EncryptKeeper@lemmy.world
on 06 Aug 03:39
collapse
Of course I used the company that is the market leader in AI as an example that AI companies are not profitable you donut, that’s how that works.
They’re not the only AI company that’s not profitable, like I said none of them are. You can take your pick if you don’t like OpenAI as an example.
That’s like saying ride-hailing and food delivery are not profitable because Uber is not profitable in the US.
I work in a profitable AI company and can list you a hundred more. I’m not sure what the point of this delusional lie is.
No startup is profitable; that’s by design, because profit-seeking is not what makes your company successful. No wonder the average American struggles financially, with this poor understanding of basic finance.
Clearly you’re arguing in bad faith and I see no point in educating you, so continue to stew in your hate while everyone else grows. Bye.
EncryptKeeper@lemmy.world
on 06 Aug 04:33
collapse
That’s like saying ride-hailing and food delivery are not profitable because Uber is not profitable in the US.
Uber is profitable and has been for years now. They also never faced the insurmountable challenges that AI companies do today.
I work in a profitable AI company and can list you a hundred more.
No you don’t and no you can’t. If you could, you would have done so by now.
No startup is profitable - thats by design because profit seeking is not what makes your company successful.
Startups generally have a plan and realistic path to profitability, unlike the AI companies of today who are not profitable, and have no concrete plan to become so. The “profit seeking” investment phase is what startups survive on until they reach profitability. But many (most) startups do not do so and go bust. The same will happen to AI if they don’t become profitable.
You may continue to live in your fantasy world based on nothing but hope and strong feelings, but you’ve failed to educate anyone here on anything besides your own ignorance. You are free to stick your fingers in your ears and your head in the sand.
Every AI software company? So much ignorance in this thread it’s almost impossible to respond to. LLM queries are super cheap already and very much profitable.
SwingingTheLamp@midwest.social
on 05 Aug 13:49
nextcollapse
I get the thinking here, but past bubbles (dot com, housing) were also based on things that have real value, and the bubble still popped. A bubble, definitionally, is when something is priced far above its value, and the “pop” is when prices quickly fall. It’s the fall that hurts; the asset/technology doesn’t lose its underlying value.
WhirlpoolBrewer@lemmings.world
on 05 Aug 13:57
nextcollapse
In a capitalist society, what is good or best is irrelevant. All that matters is whether it makes money. AI makes no money. The $200 and $300/month plans put in rate limits because at those prices they’re losing too much money. Let’s say the break-even cost for a single request is somewhere between $1-$5 depending on the request, just for the electricity, and people can barely afford food, housing, and transportation as it is. What is the business model for these LLMs going to be? A person could get a coffee today, or send a single request to an LLM? Now consider that they’ll need newer GPUs next year. And the year after that. And after that. And the data center will need maintenance. They’re paying literally millions of dollars to individual programmers.
Maybe there is a niche market for mega corporations like Google who can afford to spend thousands of dollars a day on LLMs, but most companies won’t be able to afford these tools. Then there is the problem where if the company can afford these tools, do they even need them?
The only business model that makes sense to me is the one like BMW uses for their car seat warmers. BMW requires you to pay a monthly subscription to use the seat warmers in their cars. LLM makers could charge a monthly subscription to run a micro model on your own device. That free assistant in your Google phone would then be pay walled. That way businesses don’t need to carry the cost of the electricity, but the LLM is going to be fairly low functioning compared to what we get for free today. But the business model could work. As long as people don’t install a free version.
I don’t buy the idea that “LLMs are good so they are going to be a success”. Not as long as investors want to make money on their investments.
I believe that if something has enough value, people are willing to pay for it. And by people here I mean primarily executives. The problem is that AI has not enough value to sustain the hype.
You demand citations for this, something that has been extensively covered in the news, but also throw around arguments like “absolutely inflated but it’s definitely real value” and “we’re way past the point of tech bubbles popping”. Who is cringe here?
Americans are not all people. It’s a single country that still buys out all the Nintendo Switches and Cybertrucks, so maybe if Americans had budgeting classes they wouldn’t take payday loans for Twinkies? The conjecture here is just mind-bogglingly stupid.
America has bought out Cybertrucks? Nope, not even close. Dunno about Switches, but since I’ve recently seen them on shelves, I’m guessing those haven’t been bought out either.
No, I guess I don’t. What did you mean by “America has bought out Cybertrucks”, other than what the words mean?
WhirlpoolBrewer@lemmings.world
on 05 Aug 18:40
collapse
62% of Americans are living paycheck to paycheck. Perhaps saying most Americans are struggling is doomerism, but what percentage living paycheck to paycheck no longer counts as doomerism and is just a harsh truth? 75%? 90%? Do you think the number of people living paycheck to paycheck is increasing or decreasing this year?
bridgeenjoyer@sh.itjust.works
on 05 Aug 19:41
nextcollapse
I imagine a dystopia where the main internet has been destroyed and watered down so you can only access it through a giant corpo LLM (ISPs will become LLMSPs). So you choose between watching an AI-generated movie for entertainment or a coffee. Because they will destroy the internet any way they can.
Also they’ll charge more for prompts related to things you like. It’s all there for the plundering, and consumers want it.
General_Effort@lemmy.world
on 06 Aug 11:21
collapse
Let’s say the break-even cost for a single request is somewhere between $1-$5 depending on the request, just for the electricity,
frezik@lemmy.blahaj.zone
on 05 Aug 17:58
nextcollapse
LLMs can absolutely disappear as a mass market technology. They will always exist in some sense as long as there are computers to run them and people who care to try, but the way our economy has incorporated them is completely unsustainable. No business model has emerged that can support them, and at this point, I’m willing to say that there is no such business model without orders of magnitude gains in efficiency that may not ever happen with LLMs.
Sort of agreed. I disagree with the people around here acting like AI will crash and burn, never to be seen again. It’s here to stay.
I do think this is a bubble and will pop hard. Too many players in the game, most are going to lose, but the survivors will be rich and powerful beyond imagining.
Tollana1234567@lemmy.today
on 06 Aug 05:17
collapse
The fallout is that they laid off too many tech workers for their companies to recover financially.
GamingChairModel@lemmy.world
on 05 Aug 19:56
nextcollapse
The value a thing creates is only part of whether the investment into it is worth it.
It’s entirely possible that all of the money that is going into the AI bubble will create value that will ultimately benefit someone else, and that those who initially invested in it will have nothing to show for it.
In the late 90’s, U.S. regulatory reform around telecom prepared everyone for an explosion of investment in hard infrastructure assets around telecommunications: cell phones were starting to become a thing, consumer internet held a ton of promise. So telecom companies started digging trenches and laying fiber, at enormous expense to themselves. Most ended up in bankruptcy, and the actual assets eventually became owned by those who later bought those assets for pennies on the dollar, in bankruptcy auctions.
Some companies owned fiber routes that they didn’t even bother using, and in the early 2000’s there was a shitload of dark fiber scattered throughout the United States. Eventually the bandwidth needs of near universal broadband gave that old fiber some use. But the companies that built it had already collapsed.
If today’s AI companies can’t actually turn a profit, they’re going to be forced to sell off their expensive data at some point. Maybe someone else can make money with it. But the life cycle of this tech is much shorter than the telecom infrastructure I was describing earlier, so a stale LLM might very well become worthless within years. Or it’s only a stepping stone towards a distilled model that costs a fraction to run.
So as an investment case, I’m not seeing a compelling case for investing in AI today. Even if you agree that it will provide value, it doesn’t make sense to invest $10 to get $1 of value.
Tollana1234567@lemmy.today
on 06 Aug 05:16
collapse
Didn’t Microsoft already admit their AI isn’t profitable? I suspect that’s why they have been laying off in waves. They are hoping government contracts will stem the bleeding or hold them off, and they found the sucker who will just do it: Trump. I wonder if Palantir is suffering too; surely their AI isn’t as useful to the military as they claim.
Dotcom was a bubble too, and it popped hard with huge fallout, even though the internet didn’t disappear and it still was, and is, a revolutionary thing that changed how we live our lives.
Everyone knows a bubble is a firm foundation to build upon. Now that Trump is back in office and all our American factories are busy cranking out domestic products I can finally be excited about the future again!
I predict that in a year this bubble will be at least twice as big!
Plus everyone else that pays taxes, as they will have to continue to pay for unemployment insurance, food stamps, rent assistance, etc. (not the CEOs and execs that caused it, that’s for sure).
Tollana1234567@lemmy.today
on 06 Aug 05:13
nextcollapse
And that’s why it’s being done. Everyone hopes that they make it out at just the right time to make millions while the greater fools who join too late are left holding the bag.
Bubbles are great. For those who make it out in time. They suck for everyone else, including the taxpayers who might have to bail out companies and investors.
Always following the doctrine of privatizing profits and socializing losses.
belit_deg@lemmy.world
on 06 Aug 05:05
nextcollapse
I get that people who sell AI services want to promote it. That part is obvious.
What I don’t get is how gullible the rest of society at large is. Take the Norwegian digitalization minister, who says that 80% of the public sector shall use AI. Whatever that means.
Jared Diamond had a great take on this in “Collapse”: that there are countless examples of societies making awful decisions because the decision-makers are insulated from the consequences. On the contrary, they get short-term gains.
We know that our current way of economic growth and constant new “inventions” is destroying the basis of our life. We know that the only way to stop is to fundamentally redesign the social system, moving away from capitalism, growth economics, and ever new gadgets.
But facing this is difficult. Facing this and winning elections with it is even more difficult. Instead, claiming there is some wonder technology that will save us all and putting the eggs in that basket is much easier. It will fail inevitably, but until then it is easier.
Tollana1234567@lemmy.today
on 06 Aug 05:14
nextcollapse
The CEOs, C-suites, and some people trying to get into the CS field are the ones that believe in it. I know a person who already has a degree and still thinks it’s wise to pursue a grad degree in a field adjacent to or directly involving AI.
A grad course in AI/LLM/ML might actually be useful. It’s where my old roommates learned about Google’s Transformers and got into LLMs before the hype bubble, back in 2018.
Some might get ahead of the curve for the next over-inflated hype bubble, proceed to make garbage loads of unearned money, and have learned something other than how to put ChatGPT in a new wrapper.
You don’t believe in the quantum block chain 3D printed AI cloud future mining asteroids for the private Mars colony (yet with no life extension)?
Luddite.
vacuumflower@lemmy.sdf.org
on 06 Aug 11:49
collapse
Quantum was popular as “oh god, our cryptography will die, what are we going to do”. Now post-quantum cryptography exists, and it isn’t clear what else quantum computers are useful for, other than PR.
Blockchain was popular when the supply of cryptocurrencies was kinda small; now there are too many of them. And its actually useful applications require having offline power to make decisions. Go on, tell politicians in any country that you want the electoral system transparent and blockchain-based to avoid falsification. LOL. They are not stupid. If you have a safe electoral system, you can do with much more direct democracy. Except blockchain seems a bit of an overkill for it.
3D printing is still kinda cool, except it’s just one tool among others. It’s widely used to prototype combat drones and their ammunition. The future is here, you just don’t see it.
Cloud: well, bandwidths allowed for it and it’s good for companies, so they advertised it. Except even in the richest countries internet connectivity is not a given, and at some point the wow effect is defeated by convenience. It’s just less convenient to use cloud stuff, except for things which don’t make sense without it, like temporary collaboration on a shared document.
“AI”: they’ve run out of stupid things to do with computers, so they are now promising the ultimate stupid thing. They don’t want smart things; smart things are smart because they change the world, killing monopolies and oligopolies along the way.
No room-temperature superconductor fusion reactors, space-based solar, or private space mining? Luddite.
vacuumflower@lemmy.sdf.org
on 06 Aug 12:35
collapse
#1 is like tactical nuke tech available for all civilians, #2 would make sense if all the production line and consumers are in space too, #3 would make sense as part of the same.
Earth gravity well is a bitch. We live in it. Sending stuff up is expensive, sending stuff down is stupid when it’s needed up there, but without some critical complete piece of civilization to send up at once, you’ll have to send stuff up all the time.
It’s too expensive and the profits are transcendent, as in “ideological achievement and because we can”. Also they may eventually start sending nukes down.
Thus it all makes sense only when we can build and equip an autonomous colony to send at once. Self-reliant with the condition that they will get needed materials from wherever they are sent.
I suggest something with gravity though. Europa or Ganymede or Enceladus. Something like that.
Yes? Not the principles behind them, but our understanding of them as a species.
You’re a boring doomer who thinks humans will never find, create, or invent something we’ve never done before? Seriously? What kind of boring hill is that to die on?
Nope, you’re right. We know everything there is to ever know and nothing will ever change. We’ve peaked as a species, there is literally nowhere else to go from here.
vacuumflower@lemmy.sdf.org
on 06 Aug 16:45
collapse
I disagree. It just won’t be fancy. It has to be an enormous project with existential risks. And you have to send many people at once with no return ticket. “At once” is important; you can’t ramp it up, that’s far more expensive. It has to be a mission planned very deeply in detail with plenty of failsafe paths, aimed at building a colony that can be maintained with Earth’s teaching resources, technologies, and expertise, plus locally produced and processed materials for everything. So something like that won’t happen anytime soon, but at some point it will happen.
The technologies necessary have to be perfected first, computing should stop being the main tool for hype, and the societies should adapt culturally for computing and worldwide connectivity.
These take centuries. In those centuries we’ll be busy with plenty of things existential, like avoiding the planet turning into one big 70s Cambodia.
May it go bust, God willing.
Your kids will enjoy their new Zombie Twitter AI teacher with fabulous lesson plans like, “Was the Holocaust real or just a hoax?”
I didn’t have the US becoming a banana republic on my bingo card tbf
why not
Yeah ten years seems like plenty of notice
Open models are going to kick the stool out. Hopefully.
GLM 4.5 is already #2 on lm arena, above Grok and ChatGPT, and runnable on homelab rigs, yet just 32B active (which is mad). Extrapolate that a bit, and it’s just a race to the zero-cost bottom. None of this is sustainable.
I did not understand half of what you’ve written. But what do I need to get this running on my home PC?
You can probably just use ollama and import the model.
I’m running Qwen 3B and it is seldom useful
It’s too small.
IDK what your platform is, but have you tried Qwen3 A3B? Or smallthinker 21B?
huggingface.co/…/SmallThinker-21BA3B-Instruct
The speed should be somewhat similar.
Qwen3 8B, sorry, idiot spelling on my part. I use it to talk about problems when I have no internet or have maxed out on Claude. I can rarely trust it with anything reasoning related; it’s faster and easier to do most things myself.
Yeah, 7B models are just not quite there.
There are tons of places to get free access to bigger models. I’d suggest Jamba, Kimi, Deepseek Chat, and Google AI Studio, and the new GLM chat app: chat.z.ai
And depending on your hardware, you can probably run better MoEs at the speed of 8Bs. Qwen3 30B is so much smarter it’s not even funny, and faster on CPU.
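To sketch why a MoE can run “at the speed of an 8B”: per-token compute scales with the *active* parameter count, not the total. A rough back-of-envelope, where the model configurations (Qwen3 30B-A3B, GLM 4.5’s 32B active) come from the comments above and the linear-scaling assumption ignores memory bandwidth and quantization effects:

```python
# Per-token compute relative to a dense 8B model, assuming compute
# scales linearly with active parameters (a simplification).
def relative_speed(active_params_b: float, reference_b: float = 8.0) -> float:
    """Fraction of a dense 8B model's per-token work (lower = less work)."""
    return active_params_b / reference_b

print(relative_speed(3))   # Qwen3 30B-A3B: 0.375x the work of a dense 8B
print(relative_speed(32))  # GLM 4.5 (32B active): 4.0x the work of a dense 8B
```

So a 30B-total MoE with 3B active can be cheaper per token than a dense 8B, even though it needs far more RAM to hold all the experts.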
It’s going to be slow as molasses on ollama. It needs a better runtime, and GLM 4.5 probably isn’t supported at this moment anyway.
I am referencing this: z.ai/blog/glm-4.5
The full GLM? Basically a 3090 or 4090 and a budget EPYC CPU. Or maybe 2 GPUs on a threadripper system.
GLM Air? Now this would work on a 16GB+ VRAM desktop, just slap in 96GB+ (maybe 64GB?) of fast RAM. Or the recent Framework desktop, or any mini PC/laptop with the 128GB Ryzen 395 config, or a 128GB+ Mac.
You’d download the weights, quantize yourself if needed, and run them in ik_llama.cpp (which should get support imminently).
github.com/ikawrakow/ik_llama.cpp/
But these are…not lightweight models. If you don’t want a homelab, there are better ones that will fit on more typical hardware configs.
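To put rough numbers on those hardware suggestions: a first-order memory estimate for quantized weights is just parameter count times bits per weight. The model sizes below (355B for full GLM 4.5, 106B for Air) and the 4-bit/3-bit quantization levels are illustrative assumptions, and the estimate ignores KV cache and runtime overhead:

```python
# Rough weight-memory estimate for a quantized model:
# params (billions) * bits per weight / 8 -> gigabytes.
# Ignores KV cache, activations, and runtime overhead.
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * bits_per_weight / 8

print(weight_gb(355, 4))  # full GLM 4.5 at ~4-bit: 177.5 GB
print(weight_gb(106, 4))  # GLM 4.5 Air at ~4-bit: 53.0 GB
print(weight_gb(106, 3))  # Air at ~3-bit: 39.75 GB
```

Which is why Air lands in “16GB VRAM plus 64-96GB of fast system RAM” territory, while the full model needs a GPU plus a big-memory EPYC/Threadripper box.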
As someone who works with integrating AI- it’s failing badly.
At best, it’s good for transcription, at least until it hallucinates and adds things to your medical record that don’t exist. Which it does, and when the providers don’t check for errors (which few do regularly), congrats: you now have a medical record of whatever it hallucinated today.
And they are no better than answering machines for customer service. Sure, they can answer basic questions, but so can the automated phone systems.
They can’t consistently do anything more complex without making errors- and most people are frankly too dumb or lazy to properly verify outputs. And that’s why this bubble is so huge.
It is going to pop, messily.
This is my main argument. I need to check the output for correctness anyways. Might as well do it in the first place then.
People are happy to accept the wrong answer without even checking lol
This is exactly why I love duckduckgo’s AI results built in to search. It appears when it is relevant (and yes you can nuke it from orbit so it never ever appears) and it always gives citations (2 websites) so I can go check if it is right or not. Sometimes it works wonders when regular search results are not relevant. Sometimes it fails hard. I can distinguish one from the other because I can always check the sources.
Honestly I mostly use it as a jumping off point for my code or to help me sound more coherent when writing emails.
This 1 million%.
The fact that coding is a big corner of the use cases means that the tech sector is essentially high on their own supply.
Summarizing and aggregating data alone isn’t a substitute for the smoke and mirrors of confidence that is a consulting firm. It just lets the ones that can lean on branding charge more hours for the same output, and adds “integrating AI” as another bucket of vomit to fling.
insurance companies, oh no, insurance companies !!! AArrrggghhh !!!
I tried having it identify an unknown integrated circuit. It hallucinated a chip. It kept giving me non-existent datasheets and 404 links to digikey/mouser/etc.
Well, from this description it’s still usable for things too complex to just do Monte-Carlo, but with possible verification of results. May even be efficient. But that seems narrow.
BTW, even ethical automated combat drones. I know that one word there seems out of place, but if we have an “AI” for target/trajectory/action suggestion, but something more complex/expensive for verification, ultimately with a human in charge, then it’s possible to both increase the efficiency of combat machines and not increase the chances of civilian casualties and friendly fire (when somebody is at least trying to not have those).
But how does this work help next quarter’s profits?
If each unplanned death that isn’t the result of an operator’s mistake led to confiscation of one month’s profit (not margin), then I’d think it would help very much.
As someone who is actually an AI tool developer (I just use existing models) - it’s absolutely NOT failing.
Lemmy is ironically incredibly tech illiterate.
It can be working and good and still be a bubble - you know that right? A lot of AI is overvalued but to say it’s “failing badly” is absurd and really helps absolutely no one.
I disagree with all these self hosting Linux running passionate open source advocates, so they must be technology illiterate.
According to whom? No one’s running their instance here. I’m a software dev with over 20 years of foss experience and imo lemmy’s user base is a somewhat illiterate bunch of contrarians when it comes to popular tech discussions.
We’re clearly not going to agree here without objective data, so unless you’re willing to provide that, have a good day, bye.
speak for yourself
This is what drives me nuts the most about it. We had so many incredibly efficient, purpose-built tools using the same technologies (machine learning and neural networks) and we threw them away in favor of wildly inefficient, general-purpose LLMs that can’t do a single thing right. All because of marketing hype convincing billionaires they won’t need to pay people anymore.
If you want to define “failing” as unable to do everything correctly, then sure, I’d concur.
However, if you want to define “failing” as replacing people in their jobs, I’d disagree. It’s doing that, even though it’s not meeting the criteria to pass the first test.
If I was China, I would be thrilled to hear that the west are building data centres for LLMs, sucking power from the grid, and using all their attention and money on AI, rather than building better universities and industry. Just sit back and enjoy, while I can get ahead in these areas.
They’ve been ahead for the past 2 decades. Government is robbing us blind because it only serves multinational corporations or foreign governments. It does not serve the people.
They have a demographic pit in front of them which they themselves created with “1 child policy”.
Also CCP too doesn’t exactly serve the people. It’s a hierarchy of (possibly benevolent) bureaucrats.
I never said they were ahead on social issues. They aren’t and have never been. Their infrastructure shits on ours. Hell look at their healthcare system.
The one child policy and the nightmare that will cause is not just a social policy.
And yes, China’s infrastructure is very very impressive, however it’s also true that when everything has been built in the past 30 years, it’s inevitably going to be a lot more efficient and modern than a country that has a lot of legacy baggage. A prime example of that is probably the UK, who are still trying to keep Victorian-era rail infrastructure working. Tearing out old stuff and replacing it is time consuming, complex, and expensive.
Alright, you act like China isn’t older than the UK or the US.
It’s older than both combined.
Yeah, but they only industrialised very recently
never interrupt your enemy while he is making a big mistake
SSSSIIIIIIIGGGGGGHHHHHHHHHHH…
Looks like I’ll have to prepare for yet another once-in-a-lifetime economic collapse.
How many does this make now, 6? 7? I lost track after Covid broke my understanding of time and space.
Covid really fucked everything up. My sense of time and recent-ish history went to shit.
So what sound should it make when this bubble pops?
So is it smart to short the AI bubble? 👉👈
The question is when, not if. But it’s an expensive question to guess the “when” wrong. I believe the famous idiom is: the market can stay irrational longer than you can stay solvent.
Best of luck!
The market can remain irrational longer than you can remain solvent.
Yup. If you have money you can AFFORD TO BURN, go ahead and short to your heart’s content. Otherwise, stay clear and hedge your bets.
Not only the tech bubble is doing that.
It’s also the tech bubble though, and the pyramid scheme of the US housing sector will cause more financial issues as well, and so will the whole credit card system.
Willing to take real life money bet that bubble is not going to pop despite Lemmy’s obsession here. The value is absolutely inflated but it’s definitely real value and LLMs are not going to disappear unless we create a better AI technology.
In general we’re way past the point of tech bubbles popping. Software markets move incredibly fast and are incredibly resilient to this. There literally hasn’t been a software bubble pop since the dotcom boom. Prove me wrong.
Even if you see problems with LLMs and AI in general this hopeful doomerism is really not helping anyone. Now instead of spending effort on improving things people are these angry, passive, delusional accelerationists without any self awareness.
There’s an argument that this is just the targeted-ads bubble that keeps inflating using different technologies. That’s where the money is coming from. It’s a game of smoke and mirrors, but this time it seems like they are betting big on a single technology for a longer time, which is different from what we have seen in the past 10 years.
I mean we haven’t figured out how to make AI profitable yet, and though it’s a cool technology with real-world use cases, nobody has proven yet that the juice is worth the squeeze. There’s an unimaginable amount of money tied up in a technology on the hope that one day they find a way to make it profitable, and though AI as a technology “improves”, it is not getting any closer to providing more value than it costs to run.
If I roleplayed as somebody who desperately wanted AI to succeed, my first question would be “What is the plan to have AI make money?” And so far nobody, not even the technology’s biggest sycophants have an answer.
The profit of ai lies in this: mass surveillance and ads. Thats it.
The revenue of AI lies in mass surveillance and ads. But even going full dystopia, that has not been enough to make AI companies profitable.
Millennials are killing the mass surveillance and advertising industry!
AI is absolutely profitable just not for everybody.
AI as a technology is so far not profitable for anybody. The hardware AI runs on is profitable, as might be some start ups that are heavily leveraging AI, but actually operating AI is so far not profitable, and because increasingly smaller improvements in AI use exponentially more power, there’s no real path that is visible to any of us today that suggests anyone’s yet found a path to profitability. Aside from some kind of miracle out of left field that no one today has even conceived, the long term outlook isn’t great.
If AI as a technology busts, so does the insane profits behind the hardware it runs on. And without that left field technological breakthrough, the only option to pursue to avoid AI going completely bust is to raise prices astronomically, which would bust any companies currently dependent on all the AI currently being provided to them for basically next to nothing.
The entire industry is operating at a loss, but is being propped up by the currently abstract idea that AI will some day make money. This isn’t the “AI Hater” viewpoint, it’s just the spot AI is currently in. If you think AI is here to stay, you’re placing a bet on a promise that nobody as of today can actually make.
Absolute delusion right here
Delusion? Ok let’s get it straight from the horse’s mouth then. I’ve asked ChatGPT if OpenAI is profitable, and to explain its financial outlook. What you see below, emphasis and emojis, are generated by ChatGPT:
—ChatGPT—
OpenAI is not currently profitable. Despite its rapid growth, the company continues to operate at a substantial loss.
📊 Financial Snapshot
Annual recurring revenue (ARR) was reported at approximately $12 billion as of July 2025, implying around $1 billion per month in revenue.
Projected total revenue for 2025 is $12.7 billion, up from roughly $3.7 billion in 2024.
However, OpenAI’s cash burn has increased, with projected operational losses around $8 billion in 2025 alone.
—end ChatGPT—
The most favorable projections are that OpenAI will not be cash positive (that means making a single dollar in profit) until it reaches $129 billion in revenue. That means OpenAI has to make more than 10X its current annual revenue to finally be profitable. And their current strategy to make more money is to expand their infrastructure to take on more customers and run more powerful systems.

The problem is, the models require substantially more power to make moderate gains in accuracy and capability. And every new AI datacenter means more land cost, engineers, water, and electricity. Compounding the issue, the more electricity they use, the more it costs. NJ has paved the way for a number of huge new AI datacenters in the past few years, and the cost of electricity in the state has skyrocketed. People have seen their monthly electric bills rise by 50-150% in the last couple of months alone. That’s forcing not only people out of their homes, but eats substantially into revenue growth for data centers.

It’s quite literally a race for AI companies to reach profitability before hitting the natural limits of the resources they require to expand. And I haven’t heard a peep about how they expect to do so.
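The “more than 10X” claim checks out arithmetically against the figures the comment cites (the $12.7B and $129B numbers are the comment’s own figures, not independently verified here):

```python
# Sanity check on the revenue multiple: projected 2025 revenue
# vs. the cited break-even revenue target, both in $ billions.
projected_2025_revenue_b = 12.7   # from the ChatGPT snapshot above
breakeven_revenue_b = 129.0       # the commenter's cited figure

multiple = breakeven_revenue_b / projected_2025_revenue_b
print(round(multiple, 1))  # 10.2, i.e. "more than 10X their annual revenue"
```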
You use one company that is spearheading the entire industry as your example that no AI company is profitable. Either you are arguing in extremely bad faith or you’re incredibly stupid, I’m sorry.
Of course I used the company that is the market leader in AI as an example that AI companies are not profitable you donut, that’s how that works.
They’re not the only AI company that’s not profitable, like I said none of them are. You can take your pick if you don’t like OpenAI as an example.
That’s like saying ride hailing and food delivery aren’t profitable because Uber is not profitable in the US.
I work in a profitable AI company and can list you a hundred more. I’m not sure what’s the point of this delusional lie?
No startup is profitable; that’s by design, because profit seeking is not what makes your company successful. No wonder the average American struggles financially with this poor understanding of basic finance.
Clearly you’re arguing in bad faith and I see no point in educating you, so continue to stew in your hate while everyone else grows, bye.
Uber is profitable and has been for years now. They also never faced the insurmountable challenges that AI companies do today.
No you don’t and no you can’t. If you could, you would have done so by now.
Startups generally have a plan and realistic path to profitability, unlike the AI companies of today who are not profitable, and have no concrete plan to become so. The “profit seeking” investment phase is what startups survive on until they reach profitability. But many (most) startups do not do so and go bust. The same will happen to AI if they don’t become profitable.
You may continue to live in your fantasy world based on nothing but hope and strong feelings, but you’ve failed to educate anyone here on anything besides your own ignorance. You are free to stick your fingers in your ears and your head in the sand.
Who is it profitable for right now? The only ones I see are the ones selling shovels in a gold rush, like Nvidia.
Every AI software company? So much ignorance in this thread it’s almost impossible to respond to. LLM queries are super cheap already and very much profitable.
I get the thinking here, but past bubbles (dot com, housing) were also based on things that have real value, and the bubble still popped. A bubble, definitionally, is when something is priced far above its value, and the “pop” is when prices quickly fall. It’s the fall that hurts; the asset/technology doesn’t lose its underlying value.
In a capitalist society, what is good or best is irrelevant. All that matters is whether it makes money. AI makes no money. The $200 and $300/month plans put in rate limits because at those prices they’re losing too much money. Let’s say the break-even cost for a single request is somewhere between $1 and $5 depending on the request, just for the electricity, and people can barely afford food, housing, and transportation as it is. What is the business model for these LLMs going to be? A person could get a coffee today, or send a single request to an LLM?

Now start thinking that they’ll need newer GPUs next year. And the year after that. And after that. And the data center will need maintenance. They’re paying literally millions of dollars to individual programmers.
Maybe there is a niche market for mega corporations like Google who can afford to spend thousands of dollars a day on LLMs, but most companies won’t be able to afford these tools. Then there is the problem where if the company can afford these tools, do they even need them?
The only business model that makes sense to me is the one like BMW uses for their car seat warmers. BMW requires you to pay a monthly subscription to use the seat warmers in their cars. LLM makers could charge a monthly subscription to run a micro model on your own device. That free assistant in your Google phone would then be pay walled. That way businesses don’t need to carry the cost of the electricity, but the LLM is going to be fairly low functioning compared to what we get for free today. But the business model could work. As long as people don’t install a free version.
I don’t buy the idea that “LLMs are good so they are going to be a success”. Not as long as investors want to make money on their investments.
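The pricing squeeze above can be made concrete with a toy calculation, using the commenter’s assumed $1-$5 cost per request (that range is the comment’s guess, not published data):

```python
# How many requests per month a flat-rate plan can absorb before the
# provider loses money, given an assumed per-request cost. The $1-$5
# range is the comment's assumption, not a published figure.
def breakeven_requests(plan_price_usd: float, cost_per_request_usd: float) -> int:
    return int(plan_price_usd // cost_per_request_usd)

print(breakeven_requests(200, 1))  # 200 requests/month before a $200 plan loses money
print(breakeven_requests(200, 5))  # just 40 requests/month at the high-cost end
```

Under those assumptions, even a handful of heavy requests per day puts a $200/month subscriber underwater for the provider, which would explain the rate limits.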
I believe that if something has enough value, people are willing to pay for it. And by people here I mean primarily executives. The problem is that AI has not enough value to sustain the hype.
Citation needed. The doomerism in this thread is so cringe.
People are increasingly taking out loans to buy groceries. Nobody does that if they have a better choice.
You demand citations for this, something that has been extensively covered in the news, but also throw around arguments like “absolutely inflated but it’s definitely real value” and “we’re way past the point of tech bubbles popping”. Who is cringe here?
Americans are not all people. It’s a single country that still buys out all the Nintendo Switches and Cybertrucks, so maybe if Americans had budgeting classes they wouldn’t take payday loans for Twinkies? The conjecture here is just mind-bogglingly stupid.
Ha, rich
Did I say something untrue?
America has bought out Cybertrucks? Nope, not even close. Dunno about Switches, but since I’ve recently seen them on shelves, I’m guessing those haven’t been bought out either.
You know what I meant but sure whatever makes you feel better mate
No, I guess I don’t, what did you mean by America has bought out cybertrucks, other than what the words mean?
62% of Americans are living paycheck to paycheck. Perhaps saying most Americans are struggling is doomerism, but what percentage living paycheck to paycheck no longer counts as doomerism and is just a harsh truth? 75%? 90%? Do you think the number of people living paycheck to paycheck is increasing or decreasing this year?
econofact.org/…/is-there-a-consensus-that-a-major….
You know the world is not america right?
I imagine a dystopia where the main internet has been destroyed and watered down so you can only access it through a giant corpo llm (isps will become llmsps) So you choose between watching an ai generated movie for entertainment or a coffee. Because they will destroy the internet any way they can.
Also they’ll charge more for prompts related to things you like. It’s all there for the plundering, and consumers want it.
Are you baiting the fine people here?
Proof: Agentic AI is worthless.
LLMs can absolutely disappear as a mass market technology. They will always exist in some sense as long as there are computers to run them and people who care to try, but the way our economy has incorporated them is completely unsustainable. No business model has emerged that can support them, and at this point, I’m willing to say that there is no such business model without orders of magnitude gains in efficiency that may not ever happen with LLMs.
Sort of agreed. I disagree with the people around here acting like AI will crash and burn, never to be seen again. It’s here to stay.
I do think this is a bubble and will pop hard. Too many players in the game, most are going to lose, but the survivors will be rich and powerful beyond imagining.
The fallout is they laid off too many tech workers for their companies to recover financially.
The value a thing creates is only part of whether the investment into it is worth it.
It’s entirely possible that all of the money that is going into the AI bubble will create value that will ultimately benefit someone else, and that those who initially invested in it will have nothing to show for it.
In the late 90’s, U.S. regulatory reform around telecom prepared everyone for an explosion of investment in hard infrastructure assets around telecommunications: cell phones were starting to become a thing, consumer internet held a ton of promise. So telecom companies started digging trenches and laying fiber, at enormous expense to themselves. Most ended up in bankruptcy, and the actual assets eventually became owned by those who later bought those assets for pennies on the dollar, in bankruptcy auctions.
Some companies owned fiber routes that they didn’t even bother using, and in the early 2000’s there was a shitload of dark fiber scattered throughout the United States. Eventually the bandwidth needs of near universal broadband gave that old fiber some use. But the companies that built it had already collapsed.
If today’s AI companies can’t actually turn a profit, they’re going to be forced to sell off their expensive data at some point. Maybe someone else can make money with it. But the life cycle of this tech is much shorter than the telecom infrastructure I was describing earlier, so a stale LLM might very well become worthless within years. Or it’s only a stepping stone towards a distilled model that costs a fraction to run.
So as an investment case, I’m not seeing a compelling case for investing in AI today. Even if you agree that it will provide value, it doesn’t make sense to invest $10 to get $1 of value.
Didn’t Microsoft already admit their AI isn’t profitable? I suspect that’s why they have been laying off in waves. They are hoping government contracts will stem the bleeding or hold them off, and they found the sucker who will just do it: Trump. I wonder if Palantir is suffering too, surely their AI isn’t as useful to the military as they claim.
Dotcom was a bubble too and it popped hard with huge fallout, even though the internet didn’t disappear and it still was and is a revolutionary thing that changed how we live our lives.
Overvalued doesn’t mean the thing has no value.
Everyone knows a bubble is a firm foundation to build upon. Now that Trump is back in office and all our American factories are busy cranking out domestic products I can finally be excited about the future again!
I predict that in a year this bubble will be at least twice as big!
If the bubble is on top of sand it can support anything.
When this puppy pops it’s gonna splatter all of us with chunky bits.
I feel like literally everybody knew it was a bubble when it started expanding and everyone just kept pumping into it.
How many tech bubbles do we have to go through before we learn our lesson?
what lesson? it’s a ponzi scheme and whoever is the last holding the bag is the only one losing.
Plus everyone else that pays taxes as they will have to continue to pay for unemployment insurance, food stamps, rent assistance, etc (not the CEOs and execs that caused it that’s for sure).
It’s the NEW CRYPTO hype, basically.
And that’s why it’s being done. Everyone hopes that they make it out at just the right time to make millions while the greater fools who join too late are left holding the bag.
Bubbles are great. For those who make it out in time. They suck for everyone else, including the taxpayer who might have to bail out companies and investors.
Always following the doctrine of privatizing profits and socializing losses.
I get that people who sell AI services want to promote it. That part is obvious.
What I don’t get is how gullible the rest of society at large is. Take the Norwegian digitalization minister, who says that 80% of the public sector shall use AI. Whatever that means.
Or building a gigantic fuckoff OpenAI data centre instead of new industry: openai.com/nb-NO/…/introducing-stargate-norway/
Jared Diamond had a great take on this in “Collapse”: there are countless examples of societies making awful decisions because the decision-makers are insulated from the consequences. On the contrary, they get short-term gains.
We know that our current way of economic growth and consistent new “inventions” is destroying the basis of our life. We know that the only way to stop is to fundamentally redesign the social system, moving away from capitalism, growth economics and ever new gadgets.
But facing this is difficult. Facing this and winning elections with it is even more difficult. Instead, claiming there is some wonder technology that will save us all and putting the eggs in that basket is much easier. It will fail inevitably, but until then it is easier.
The CEOs, C-suites, and some people trying to get into the CS field are the ones that believe in it. I know a person who already has a degree, and still thinks it’s wise to pursue a grad degree in a field adjacent to or directly involving AI.
A grad course in AI/LLM/ML might actually be useful. It’s where my old roommates learned about Google’s Transformers and got into LLMs before the hype bubble, in 2018.
They might get ahead of the curve for the next over-inflated hype bubble, proceed to make unearned garbage loads of money, and will have learned something other than how to put ChatGPT in a new wrapper.
Never. Some people think the universe owes us Star Trek and are just waiting for something new to happen.
It’s going to be great when the AI hype bubble crashes
You don’t believe in the quantum block chain 3D printed AI cloud future mining asteroids for the private Mars colony (yet with no life extension)?
Luddite.
Quantum was popular as “oh god, our cryptography will die, what are we going to do”. Now post-quantum cryptography exists and it doesn’t seem to be clear what else quantum computers are useful for, other than PR.
Blockchain was popular when the supply of cryptocurrencies was kinda small; now there are too many of them. And also its actually useful applications require having offline power to make decisions. Go on, tell politicians in any country that you want the electoral system exposed and blockchain-based to avoid falsifications. LOL. They are not stupid. If you have a safe electoral system, you can do much more direct democracy. Except blockchain seems a bit of an overkill for it.
3D printing is still kinda cool, except it’s just one tool among others. It’s widely used to prototype combat drones and their ammunition. The future is here, you just don’t see it.
Cloud - well, bandwidths allowed for it and it’s good for companies, so they advertised it. Except even in the richest countries Internet connectivity is not a given, and at some point wow-effect is defeated by convenience. It’s just less convenient to use cloud stuff, except for things which don’t make sense without cloud stuff. Like temporary collaboration on a shared document.
“AI” - they’ve run out of stupid things to do with computers, so they are now promising the ultimate stupid thing. They don’t want smart things; smart things are smart because they change the world, killing monopolies and oligopolies along the way.
No room-temperature superconductor fusion reactors, space-based solar, or private space mining? Luddite.
#1 is like tactical nuke tech available for all civilians, #2 would make sense if all the production line and consumers are in space too, #3 would make sense as part of the same.
Earth gravity well is a bitch. We live in it. Sending stuff up is expensive, sending stuff down is stupid when it’s needed up there, but without some critical complete piece of civilization to send up at once, you’ll have to send stuff up all the time.
It’s too expensive and the profits are transcendent, as in “ideological achievement and because we can”. Also they may eventually start sending nukes down.
Thus it all makes sense only when we can build and equip an autonomous colony to send at once. Self-reliant with the condition that they will get needed materials from wherever they are sent.
I suggest something with gravity though. Europa or Ganymede or Enceladus. Something like that.
Are you a Space Nutter?
It’s not going to happen. No one is going to move to space or send nukes down or mine asteroids.
Ever.
Are you a round earth nutter?
It’s not going to happen. No one is going to get past the edge of the world or sail the whole world or find new land.
Ever.
If you don’t see how that’s a completely dumb comparison, this is hopeless. I’m reality-based, you are not.
Sure, friend. You can see reality thousands of years into the future and know exactly what happens.
My bad.
Do you think physics and chemistry have changed in some significant way over the last thousand years?
Yet somehow, YOU can see reality in a thousand years, and it matches the sci-fi mindrot you watched as a kid…
Yes? Not the principles behind them, but our understanding of them as a species.
You’re a boring doomer who thinks humans will never find, create, or invent something we’ve never done before? Seriously? What kind of boring hill is that to die on?
Uh, it’s called “reality” my friend, try it.
You can’t “invent” your way out of fundamental physical limits.
A Boeing 747 looks the same in 1969 as it does today. It still flies over the Atlantic in six hours burning kerosene in turbofan engines.
Sure, you can get a few percent here, a few percent there, but do you think suddenly we’ll have warp drive?
Come on. Do you know how empty and huge space is?
Nope, you’re right. We know everything there is to ever know and nothing will ever change. We’ve peaked as a species, there is literally nowhere else to go from here.
Oh OK, the only logical counterpoint is we’re going to space.
Wheeee!!! Dibs on Neptune!
In practice my comment means that it’s far too early to think of space colonization.
Far too late as well. It will never happen.
I disagree. It just won’t be fancy. It has to be an enormous project with existential risks. And you have to really send many people at once with no return ticket. “At once” is important, you can’t ramp it up, that’s far more expensive. It has to be a mission very deeply planned in detail with plenty of failsafe paths, aimed at building a colony that can be maintained with Earth’s teaching resources, technologies and expertise, and locally produced and processed materials for everything. So - something like that won’t happen anytime soon, but at some point it will happen.
The technologies necessary have to be perfected first, computing should stop being the main tool for hype, and the societies should adapt culturally for computing and worldwide connectivity.
These take centuries. In those centuries we’ll be busy with plenty of things existential, like avoiding the planet turning into one big 70s Cambodia.
Um, OK.
Well that’s a lot of words that I wasted time reading.
Too bad
Quantum computing has incredible value as a scientific tool, what are you talking about.
OK, sorry.
Soon to lose the r from propping.
youtu.be/tZlBNFvNjr8?t=56