AI adoption rate is declining among large companies — US Census Bureau claims fewer businesses are using AI tools (www.tomshardware.com)
from mesamunefire@piefed.social to technology@lemmy.world on 09 Sep 03:29
https://piefed.social/post/1243187

A new survey conducted by the U.S. Census Bureau and reported on by Apollo seems to show that large companies may be tapping the brakes on AI. Large companies (defined as having more than 250 employees) have reduced their AI usage, according to the data (click to expand the Tweet below). The slowdown started in June, when it was at roughly 13.5%, slipping to about 12% at the end of August. Most of the other lines, representing companies with fewer employees, are also in decline, though some are still rising.

#technology

underline960@sh.itjust.works on 09 Sep 03:43 next collapse

13.5%, slipping to about 12%

I know that 1.5 percentage points could mean hundreds of businesses, but this still seems like such a nothingburger.

Truscape@lemmy.blahaj.zone on 09 Sep 03:53 next collapse

The issue isn’t the percentage, it’s the reversal of growth. Most investors want growth so they can see returns on their upfront capital. If growth isn’t happening, that’s a good cue to read the room and pull your funding.

Similar issues occurred with streaming services. Netflix is still profitable, but because the userbase isn’t growing, investors and the financial world stopped seeing it as a valuable platform to invest in.

sexy_peach@feddit.org on 09 Sep 05:00 next collapse

The AI companies haven’t even found a viable business model yet; they’re bleeding money while the user base is shrinking.

shalafi@lemmy.world on 09 Sep 05:45 next collapse

The lack of business model is what’s freaking me out.

Around 2003 I was talking to a customer about Google going public and saying he should go all in.

“Meh, they’re a great search engine, but I can’t see how they’ll make any money.”

Still remember that conversation, standing in his attic, wiring his new satellite dish. Wonder if he remembers that conversation as well.

setsubyou@lemmy.world on 10 Sep 05:58 collapse

What gets me is that even the traditional business models for LLMs are not great. Like translation, grammar checking, etc. Those existed before the boom really started. DeepL has been around for almost a decade and their services are working reasonably well and they’re still not profitable.

underline960@sh.itjust.works on 09 Sep 12:18 next collapse

Isn’t that the case with a lot of modern tech?

I vaguely recall Spotify and Uber being criticized for relying on the “get big first and figure out how to monetize later” model.

(Not defending them, just wondering what’s different about AI.)

khornechips@sh.itjust.works on 09 Sep 14:06 collapse

Spotify is a music streaming service with subscription fees generating recurring revenue, it would be fine in a world without an investor class obsessed with infinite growth. Uber is to taxis what crypto is to banks, essentially exploiting a gap in regulations to undercut an existing market.

“AI” is a solution desperately looking for a problem to justify all the money and resources being wasted on it.

underline960@sh.itjust.works on 09 Sep 14:34 collapse

What are you talking about? ChatGPT, Claude, Gemini, etc. all have “subscription fees generating recurring revenue” and are famously “exploiting a gap in regulations to undercut an existing market.”

Uber took 15 years to become profitable, and Spotify took 18 years.

Again, I’m not defending any of them (they all exploit the people who make their service work), but so far AI seems to be going down the same road.

khornechips@sh.itjust.works on 09 Sep 15:07 collapse

Spotify provides a real, tangible service. I pay for access to music; I get access to music.

What service does an LLM actually provide? They can’t be relied on for accurate information, they can’t reason, the only thing they seem to be able to do is psychologically manipulate their users. That makes money now, but in six months? A year? We’re already seeing usage fall despite some of the wealthiest companies on the planet burning unfathomable amounts of money.

underline960@sh.itjust.works on 09 Sep 16:03 collapse

“I pay for access to music; I get access to music.” And with ChatGPT, you pay for access to an LLM, and you get access to an LLM.

Just because you personally don’t value that as a service doesn’t inherently invalidate it as a business model, now or in the future.

Netflix lost subscribers in 2011 and 2022, but that didn’t kill the company. Uber stock tumbled during the pandemic and again in 2022. In 2023, Wired was writing about how “despite its popularity… [Spotify] has long struggled to turn consistent profits.”

This is a whole wave of companies where the survivors seem financially stable now, but had a long history of being propped up by venture capital and having an unclear path to profitability.

The only difference you’ve successfully shown so far is that you don’t think it’s a real service.

I generally agree, but I still don’t see anything that differentiates its trajectory from the Spotifys, Ubers, and Netflixes of the world.

Tollana1234567@lemmy.today on 10 Sep 06:19 collapse

The AI data centers aren’t cheap to cool down or power. Plus, the “customers” are mostly other C-suites and CEOs anyway.

Saleh@feddit.org on 09 Sep 05:29 next collapse

That is more than a 10% loss of that customer base in two months.

For any industry that is huge.
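
To put that in concrete terms, here’s the arithmetic on the two figures from the article (13.5% in June, about 12% at the end of August):

```python
# The drop from 13.5% to 12% of large firms is ~11% of that customer base.
june, late_august = 13.5, 12.0                   # percent of large firms reporting AI use
relative_drop = (june - late_august) / june * 100
print(f"{relative_drop:.1f}% relative decline")  # -> 11.1% relative decline
```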

CommanderCloon@lemmy.ml on 09 Sep 06:06 collapse

But they’re already not making money; losing customers during the supposed growth phase is absolutely devastating. And it’s all occurring while AI is being subsidized by massive investments from the likes of Microsoft and Google, and many more nameless VCs, through OpenAI, Anthropic, etc.

sj_zero@lotide.fbxl.net on 09 Sep 03:45 next collapse

IMO, AI is a really good demo for a lot of people, but once you start using it, the gains you can get from it end up being somewhat minimal without doing some serious work.

Reminds me of ten other technologies that were hyped as if the world would end if you didn’t get in, but that ended up more niche than you’d expect.

chaosCruiser@futurology.today on 09 Sep 06:25 next collapse

Cyberspace, hypertext, multimedia, dot com, Web 2.0, cloud computing, SaaS, mobile, big data, blockchain, IoT, VR and so many more. Sure, they can be used for some things, but doing that takes time, effort and money. On top of that, you need to know exactly when to use these things and when to choose something completely different.

MagicShel@lemmy.zip on 09 Sep 06:40 next collapse

As someone who is excited about AI and thinks it’s pretty neat, I agree we’ve needed a level-set around the expectations. Vibe coding isn’t a thing. Replacing skilled humans isn’t a thing. It’s a niche technology that never should’ve been sold as making everything you do with it better.

We’ve got far too many companies who think adoption of AI is a key differentiator. It’s not. The key differentiator is almost always the people, though that’s not as sexy as cutting edge technology.

krunklom@lemmy.zip on 09 Sep 08:03 next collapse

The technology is fascinating and useful - for specific use cases and with an understanding of what it’s doing and what you can get out of it.

From LLMs to diffusion models to GANs there are really, really interesting use cases, but the technology simply isn’t at the point where it makes any fucking sense to have it plugged into fucking everything.

Leaving aside the questionable ethics many paid models’ creators have relied on to build their models, the backlash against AI is understandable because it’s being shoehorned into places it just doesn’t belong.

I think eventually we may “get there” with models that don’t make so many obvious errors in their output - in fact I think it’s inevitable it will happen eventually - but we are far from that.

I do think that the “fuck ai” stance is shortsighted though, because of this. This is happening, it’s advancing quickly, and while gains on LLMs are diminishing, we as a society really need to be having serious conversations about what things will look like when (and/or if, though I’m more inclined to believe it’s when) we have functional models that are accurate in their output.

When it actually makes sense to replace virtually every profession with AI (it doesn’t right now, not by a long shot), how are we going to deal with that as a society?

floofloof@lemmy.ca on 09 Sep 14:52 collapse

The key differentiator is almost always the people, though that’s not as sexy as cutting edge technology.

Evidently you haven’t worked with me. I’m actually quite sexy.

Damage@feddit.it on 09 Sep 07:47 next collapse

I’ve got a friend who has to lead a team of apparently terrible developers in a foreign country, he loves AI, because “if I have to deal with shitty code, send back PRs three times then do it myself, I might as well use LLMs”

And he’s like one of the nicest people I know, so if he’s this frustrated, it must be BAD.

Aceticon@lemmy.dbzer0.com on 09 Sep 14:48 collapse

I had to do this myself at one point and it can be very frustrating.

It’s basically the “tech makes lots of money” effect, which attracts lots of people who don’t really have any skill at programming and would never have gone into it if it weren’t for the money.

We saw this back in earlier tech booms, and we see it now in poorer countries to which lots of IT work has been outsourced - they still have the same fraction of natural techies as everywhere else, but the demand is so large that masses of people with no real tech skill join the profession, get given actual work to do, and suck at it.

Also beware of cultural expectations and quirks. The team I had to manage was based in India, and during group meetings on the phone they would never admit that they did not understand something about a task they were given, or that something was missing (I believe this was so as not to lose face in front of others), so they often just went ahead with wrong assumptions and did the wrong things. I solved this by talking to each member of that outsourced team after any such group meeting, individually and in a very non-judgemental way (I pretty much had to frame it as “me, being unsure if I explained things correctly”), to tease out any questions or doubts. That helped avoid tons of implementation errors that came from misunderstanding the requirements, or from the requirements themselves lacking certain details and the devs just making their own assumptions about what should go there.

That said, even their shit code (compared to what we on the other side, who were all senior developers or above, produced) actually had a consistent underlying logic throughout, with even the bugs being consistent (humans tend to be consistent in the kinds of mistakes they make), all of which helps with figuring out what is wrong. LLMs aren’t even as consistent as incompetent humans.

paequ2@lemmy.today on 10 Sep 05:17 collapse

AI is a really good demo for a lot of people, but once you start using it, the gains you can get from it end up being somewhat minimal without doing some serious work.

I’m so sick of “AI demos” at work. Every demo goes like this.

  1. Generate text with an LLM.
  2. Don’t fact check it.
  3. Don’t verify it works.
  4. Oooh and aahhh at random numbers and charts.
  5. Higher-ups all clap and say we could be 10x more productive if more people would just use AI more.

Meanwhile they ignore that zero AI projects have actually stuck around or been used in a meaningful way.

setsubyou@lemmy.world on 10 Sep 05:40 collapse

As someone who sometimes makes demos of our own AI products at work for internal use, you have no idea how much time I spend on finding demo cases where LLM output isn’t immediately recognizable as bad or wrong…

To be fair, it’s pretty much only the LLM features that are like this. We have some more traditional AI features that work pretty well. I think they just tacked on an LLM because that’s what’s popular right now.

Etterra@discuss.online on 09 Sep 04:45 next collapse

So instead of bursting, the bubble is slowly deflating?

sexy_peach@feddit.org on 09 Sep 04:54 next collapse

That’s user rates, not the stock price

Goodeye8@piefed.social on 09 Sep 10:19 collapse

A bubble is built on the potential of the investment. It's unlikely that AI is anywhere near its invested potential, which means declining usage might actually be an indicator that the bubble is about to pop: a few big investors decide the potential has been reached and pull out, and then it cascades into a crash.

Mrkawfee@feddit.uk on 09 Sep 05:19 next collapse

Western growth is predicated on bubbles.

Sunshine@piefed.ca on 09 Sep 05:40 next collapse

Finally. Now decommission the slop.

Asidonhopo@lemmy.world on 09 Sep 05:57 next collapse

The US Census Bureau keeps track of things like that? Huh… TIL

reksas@sopuli.xyz on 09 Sep 06:51 next collapse

oh the horror

psoul@lemmy.world on 09 Sep 07:16 next collapse

Nature is healing

SunSunFuego@lemmy.ml on 09 Sep 07:58 next collapse

Let’s not forget the US is pumping EVERYTHING into AI; 3-4% of GDP is just the AI economy. Here’s hoping it comes crashing down on them.

MoonMoon@lemmy.world on 09 Sep 08:19 next collapse

Time to short Google yet?

l_isqof@lemmy.world on 09 Sep 10:12 collapse

You might as well put your money on red at the casino.

RedGreenBlue@lemmy.zip on 09 Sep 08:46 next collapse

For the things AI is good at, like reading documentation, one should just get a local model and be done.

I think pouring in as much money as big companies in the US have been doing is unwise. But when you have deep pockets, I guess you can afford to gamble.

SavageCoconut@lemmy.world on 09 Sep 09:52 collapse

Could you point me to a model that can do that, and instructions on how to get it up and running?

FauxLiving@lemmy.world on 09 Sep 10:26 next collapse

I’m using Deepseek R1 (8B) and Gemma 3 (12B), installed using LM Studio (which pulls directly from Hugging Face).
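
For anyone who wants to script against those models rather than chat in the GUI: LM Studio can expose a local OpenAI-compatible server, so a minimal sketch looks like this (the port is LM Studio’s default; the model id here is hypothetical - use whatever `client.models.list()` reports on your machine):

```python
# Minimal sketch: query a local model through LM Studio's
# OpenAI-compatible server (Developer tab -> "Start Server").
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",                  # any non-empty string; no real key needed locally
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-llama-8b",  # hypothetical id; check client.models.list()
    messages=[{"role": "user", "content": "Summarize these release notes for me."}],
)
print(response.choices[0].message.content)
```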

Cethin@lemmy.zip on 09 Sep 14:27 next collapse

As the other comment says, LM Studio is probably the easiest tool. Once you’ve got it installed it’s trivial to add new models. Try some out and see what works best for you. Your hardware will be a limit on what you can run though, so keep that in mind.

null_dot@lemmy.dbzer0.com on 09 Sep 22:10 collapse

I don’t have the hardware, so I’m using Open WebUI to run queries on models accessible via the Hugging Face API.

Works really well. I haven’t invested the time to understand how to use workspaces, which let you tune models, but apparently it’s doable.
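
And if you want to skip the frontend entirely, here’s a rough sketch of querying a hosted model straight through the `huggingface_hub` client (the model id is just an example; swap in any model served by the HF Inference API):

```python
# Rough sketch: query a hosted model via the Hugging Face Inference API
# (no local GPU needed; requires a free HF access token).
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="mistralai/Mistral-7B-Instruct-v0.3",  # example model; use your own pick
    token="hf_...",                              # your HF access token
)

print(client.text_generation("What does a vector index do?", max_new_tokens=120))
```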

probable_possum@leminal.space on 09 Sep 08:50 next collapse

I mean the automatic speech recognition and transcription capabilities are quite useful. But that’s about it, for me for now.

It could be interesting for frame interpolation in movies at some point maybe, I guess.

I dream of using it for the reliable classification of things. But I haven’t seen it working reliably, yet.

For the creation of abstracts and as a dialog system for information retrieval it doesn’t feel exact/correct/congruent enough to me.

Also: a working business plan to make money with actual AI services has yet to be found. Right now it’s all playing with a shiny new toy, and with the expectations and money of investors. So far they have failed to deliver, and the investors might get restless. Selling the business while it is still massively overvalued seems like the only way forward. But that’s just my opinion.

MonkderVierte@lemmy.zip on 09 Sep 09:19 collapse

I mean the automatic speech recognition and transcription capabilities are quite useful.

That’s what LLMs are made for: text stuff, not knowledge stuff.

probable_possum@leminal.space on 09 Sep 09:46 collapse

That’s what LLMs are made for:

Hence the name? :)

jaykrown@lemmy.world on 09 Sep 09:55 next collapse

It is absolutely a bubble, but the applications that AI can be used for still remain while the models continue to get better and cheaper. Here’s the actual graph:

(Graph: https://lemmy.world/pictrs/image/8b736281-b1e3-436c-b278-632c3cfd0de4.png)

r0ertel@lemmy.world on 09 Sep 14:41 collapse

This contradicts what I’ve been reading, which says AI model costs grow with each generation rather than shrink.

jaykrown@lemmy.world on 10 Sep 00:26 next collapse

That was published a year ago, it’s highly selective, and it doesn’t include models like Llama 4 Maverick.

jaykrown@lemmy.world on 10 Sep 00:27 collapse

Also, that is the cost to train them, not the cost to use them, which is different.

muhyb@programming.dev on 09 Sep 10:31 next collapse

Took them long enough.

Sam_Bass@lemmy.world on 09 Sep 10:33 next collapse

Some decent news at least

jubilationtcornpone@sh.itjust.works on 09 Sep 13:02 next collapse

Personal Anecdote

Last week I used the AI coding assistant within JetBrains DataGrip to build a fairly complex PostgreSQL function.

It put together a very well organized, easily readable function, complete with explanatory comments, that failed to execute because it was absolutely littered with errors.

I don’t think it saved me any time but it did help remove my brain block by reorganizing my logic and forcing me to think through it from a different perspective. Then again, I could have accomplished the same thing by knocking off work for the day and going to the driving range.

August27th@lemmy.ca on 09 Sep 13:16 next collapse

Then again, I could have accomplished the same thing by knocking off work for the day and going to the driving range.

Hey, look at the bright side, as long as you were chained to your desk instead, that’s all that matters.

UncleMagpie@lemmy.world on 09 Sep 14:27 next collapse

The bigger problem is that your skills are weakened a bit every time you use an assistant to write code.

floofloof@lemmy.ca on 09 Sep 14:45 next collapse

It depends how you’re using it. I use it for boilerplate code, for stubbing out classes and functions where I can tell it clearly what I want, for finding inconsistencies I might have missed, to advise me on possible tools and approaches for small things, and as a supplement to the documentation when I can’t find what I’m looking for. I don’t use it for architecting new things, writing complex and specialized code, or as a replacement for documentation. I feel like I have it fairly well contained to what it does well, so I don’t waste my time on what it does badly, and it isn’t really eating away at my coding brain because I still do the tricky bits myself.

some_kind_of_guy@lemmy.world on 10 Sep 06:07 collapse

This is exactly how it’s meant to be used. People who think it’s to be used for more than what you’ve described are not serious people.

Knock_Knock_Lemmy_In@lemmy.world on 10 Sep 06:33 collapse

There is no “meant to be used”. LLMs were not created to solve a specific problem.

Honytawk@lemmy.zip on 09 Sep 16:29 next collapse

That is just dumb.

Your skills are weakened even more by copying code from someone else, because you use even less of your brain to complete the task.

Yet you people don’t complain about that part at all and even do it yourselves all the time. For some it’s the preferred way of working.

“Using your skills less means they get weaker, who would have thought!”

By your logic, you shouldn’t use any form of help to code. Programmers should just lock themselves in a big black box until the project is finished; that will make sure their skills aren’t “weakened” by outside help.

UncleMagpie@lemmy.world on 09 Sep 16:42 collapse

No, that’s not the same thing. It’s the difference between looking up how to do something and having it done for you.

There have been multiple articles recently that show AI weakens skills.

forbes.com/…/the-dark-side-of-ai-tracking-the-dec…

Btw there’s no need to add strawman arguments with scenarios I didn’t mention.

KneeTitts@lemmy.world on 09 Sep 16:31 collapse

The bigger problem is that your skills are weakened a bit every time you use an assistant to write code

Not when you factor in that you are now doing code review for it and fixing all its mistakes…

Cethin@lemmy.zip on 09 Sep 14:33 collapse

At one point I tried to use a local model to generate something for me. It was full of errors, but after some searching online for a library or existing examples, I found a GitHub repo that was almost an exact copy of what it had generated. The comments were the same, and the code was mostly the same, except this version wasn’t fucked up.

It turns out text prediction isn’t that great at understanding the logic of code. It’s only good at copying existing code, but it doesn’t understand why that code works, so the predictive model fucks things up when it takes a less likely result. Maybe if you turned the temperature down so it only ever gave the highest-probability prediction it wouldn’t be horrible, but then you might as well just search online and copy the code it was going to generate anyway.
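
For the curious, “turning the temperature down” just sharpens the distribution the model samples from; in the limit it becomes plain argmax over the predicted tokens. A toy sketch with made-up logits, not a real model:

```python
# Toy demo: temperature sampling vs. greedy (argmax) decoding.
import numpy as np

logits = np.array([2.0, 1.5, 0.3])  # made-up scores for three candidate tokens

def sample(logits, temperature):
    """Sample a token index; lower temperature concentrates on the top token."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

print(int(np.argmax(logits)))  # greedy: always the top-scoring token
print(sample(logits, 2.0))     # high temperature: frequent "less likely results"
print(sample(logits, 0.1))     # low temperature: almost always the top token
```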

7toed@midwest.social on 10 Sep 12:13 collapse

But… how else do we sell our tool as a super intelligent sentient do-it-all?

Pat_Riot@lemmy.today on 09 Sep 13:02 next collapse

They dressed up a parrot and called it the golden goose and now they’re chasing a wild goose.

MycelialMass@lemmy.world on 09 Sep 20:23 collapse

Wild parrot surely

Tollana1234567@lemmy.today on 10 Sep 06:40 collapse

An undomesticated Psittaciformes.

mechoman444@lemmy.world on 09 Sep 13:36 next collapse

Of course. Although AI, or more accurately LLMs, do have useful functions, they are not the Star Trek computer.

I use ChatGPT as a Grammer check all the time. It’s great for stuff like that. But it’s definitely not an end-all-be-all solution for productivity.

I think corporations got excited that LLMs could replace human labor… but they can’t.

Typhoon@lemmy.ca on 09 Sep 13:58 collapse

Grammer

Grammar.

There’s nothing AI can do that an internet pedant can’t.

floofloof@lemmy.ca on 09 Sep 14:49 next collapse

grammar

Mind your capitalization, fellow pedant.

mechoman444@lemmy.world on 09 Sep 20:19 collapse

No. It’s grammer. No one says grammAR, everyone says it with -er. It’s spelled grammar due to tradition and nothing else. Same reason the ph is still prevalent in the English language.

Ehhhhhh, the English language is terrible!

Giblet2708@lemmy.sdf.org on 10 Sep 04:16 collapse

Sure, English is terrible. Don’t forget dollar, pillar, cougar, burglar, doctor, actor, or aviator. Yet, oddly enough, somehow most people deal with them, and life goes on.

Go read about The Great Vowel Shift; it’s pretty informative.

kazerniel@lemmy.world on 09 Sep 13:43 next collapse

Fucking finally. Maybe the hype wave has crested 🤞

KneeTitts@lemmy.world on 09 Sep 16:32 collapse

finally. Maybe the hype wave has crested

Well one thing I can tell you is that art is gone, forever. They took that from us and our kids and all generations to come.

ayyy@sh.itjust.works on 09 Sep 17:56 next collapse

This is so melodramatic. Nobody is stopping you from drawing or painting or whatever.

Siegfried@lemmy.world on 09 Sep 20:15 next collapse

Naaa, AI “art” output is trash. You just need to train your eye to notice the patterns.

kazerniel@lemmy.world on 11 Sep 08:51 collapse

I don’t think that’s the case; anyone can still make art. Though it’s true that it’s even harder to make a living from art now than it already was.

kent_eh@lemmy.ca on 09 Sep 13:42 next collapse

That is good news, assuming numbers being reported by a US government agency are accurate, which is no longer a certainty.

RandAlThor@lemmy.ca on 09 Sep 14:12 next collapse

Large companies (defined as having more than 250 employees) have reduced their AI usage, according to the data (click to expand the Tweet below). The slowdown started in June, when it was at roughly 13.5%, slipping to about 12% at the end of August.

Someone explain how I’m supposed to read this “rate” - is it an adoption rate or a usage rate? If it’s an adoption rate, does that mean 13.5% of all large firms were using it, and that has declined to 12%? Or is it some sort of usage rate, and if so, what the fuck is 12% usage?

ayyy@sh.itjust.works on 09 Sep 17:56 collapse

It means 12% of those surveyed answered “yes” to the question “Are you using AI at your company?” (Note this is not the literal question from the survey, because I can’t be assed to dig up their methodology.)

RandAlThor@lemmy.ca on 09 Sep 21:44 collapse

It’s stupidly written, like AI.

eronth@lemmy.dbzer0.com on 09 Sep 15:37 next collapse

Kind of a weird title. Of course adoption would slow? The people who want it have adopted it, the people who don’t haven’t.

_haha_oh_wow_@sh.itjust.works on 09 Sep 15:48 next collapse

It would also slow if companies were told insane lies about the capability of “AI” (“it’s like having a team of PhD-level experts at your disposal!”) and then realized that many of those promises were total bullshit.

KneeTitts@lemmy.world on 09 Sep 16:30 next collapse

We were initially excited by AI at my company, but after we used it a bit we didn’t find any really meaningful use cases for it in our business model. In most cases we spent a lot of time correcting its many errors, which actually slowed down our processes…

UnderpantsWeevil@lemmy.world on 09 Sep 19:32 collapse

Marx tapping the big sign marked “Tendency of the rate of profit to fall”, but then looking at the already unprofitable AI spin-offs and just throwing his hands up in disgust.

I think there’s an argument to be made that the AI hype got a bunch of early adopters, but failed to entice more traditional mainstream clients. But the idea that we just ran out of new AI users in… barely two years? No. Nobody is really paying for this shit in a meaningful way. Not at the Enterprise Application scale of subscriptions. That’s why Microsoft is consistently losing money (on the scale of billions) on its OpenAI investment.

If people were adopting AI like they’d adopted the latest Windows OS, these firms would be seeing a steady growth in the pool of users that would signal profitability soon (if not already). But the estimates they’re throwing out - one billion AI adoptions in barely a year - are entirely predicated on how many people just kinda popped in, looked at the web interface, and lost interest.

salacious_coaster@infosec.pub on 10 Sep 01:02 next collapse

Why is the Census Bureau tracking LLM adoption?

Lucidlethargy@sh.itjust.works on 10 Sep 02:46 next collapse

Because they are FUCKING TRASH.

Dremor@lemmy.world on 10 Sep 07:11 collapse

Not for all use cases, but for most of them, they are.

ProgrammingSocks@pawb.social on 10 Sep 03:06 next collapse

Because they suck.

rumba@lemmy.zip on 10 Sep 03:23 next collapse

It’ll right itself when the CEOs stop investing in it and forcing it on their own companies.

When they’re not getting their returns, they’ll sell their stocks and stop paying for it.

It’ll eventually go back from slop generation to correction and light editing tools, once venture capital stops paying for the hardware to run tokens and companies have to pay to replace the cards.

Tollana1234567@lemmy.today on 10 Sep 06:29 collapse

and they will drop it altogether.

Treetrimmer@sh.itjust.works on 10 Sep 04:14 collapse

That’s unfortunate, because I want an excuse not to be a corporate slave.

squaresinger@lemmy.world on 10 Sep 12:01 collapse

Tbh, better a corporate slave than a startup slave.