from throws_lemy@lemmy.nz to tech@programming.dev on 17 Jul 17:36
https://lemmy.nz/post/25665276
Back in 1999, Wall Street lost its collective mind over the internet. Companies with no revenue were suddenly worth billions, “eyeballs” were treated as currency, and market analysts predicted a frictionless future where everything would be digital. Then the bubble burst. Between March 2000 and October 2002, an estimated five trillion dollars in market value vanished into thin air.
Today, it is happening again. This time, the magic word is not “.com.” It is “AI.” According to Torsten Slok, the influential chief economist at Apollo Global Management, a major global investment firm, the current AI-driven market bubble is even more stretched than the dot-com frenzy of the late 1990s. And he has the data to prove it.
“The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s,” Slok wrote in a recent research note that was widely shared across social media and financial circles.
OpenAI lost like 5 BILLION dollars last year. With a ‘B’. There is no way all these AI companies will ever see an ROI. Somebody (or more likely a lot of somebodies) will get left holding the bag.
They actually learned their lesson from the dot-com bust. Yeah, there are mostly going to be losers, but the handful that come out on top are going to absolutely dominate. Nobody wants to risk losing out on being the next Facebook or Google.
That’s why they are putting AI into every fucking thing. They want to get you hooked on it so, maybe, they can have a business.
and customers are just out here like “naw”
I realized a while back that one of the primary goals of these LLMs is to get people to continue using them. While that’s not especially notable - the same could be said of many consumer products and services - the way in which this manifests in LLMs is pretty heinous.
This need for continued use is why, for example, Google’s AI was returning absolute nonsense when asked about the origins of fictitious idioms. These models are designed to return something, and to make that something pleasing to the reader, truth and utility be damned. As long as the user thinks that they’re getting what they wanted, it’s mission accomplished.
On this topic, this podcast episode is very interesting:
techwontsave.us/…/282_chatbots_are_repeating_soci…
Apparently patched. I just tried this out:
This hits another problem - I know the idiom doesn’t exist, because I made it up. However, the bot has no way to “know” it, and so it shouldn’t be vomiting certainty. (Or rather, what a human would interpret as certainty.)
It’s not OpenAI putting their AI into products though. It’s other companies and CEOs.
Yeah the entire market is wild right now.
This has been said all along. And I’d wager a lot of investors agree. But the stock market is essentially gambling and you can’t argue with market trends. Even the critics on Wall Street will ride the wave until it comes crashing down, in the hope that they can cash out quickly enough or catch the coattails of what few firms make it out the other side.
our economic system is wholly underpinned by logical fallacies
Our retirements are wagered at this table.
I know the moment I try investing in anything, it’ll all come crashing down. You all should be paying me to not invest tbh
That’s a catch-22 because the moment we invest in your non-investments, that will also cause it
Just short yourself
Apple as an AI company? Right…
The key difference is that the internet is a fuckload more useful than what’s being sold as AI.