Is AI progress slowing down? (open.substack.com)
from Joker@sh.itjust.works to technology@lemmy.world on 20 Dec 08:32
https://sh.itjust.works/post/29757431

#technology


Alphane_Moon@lemmy.world on 20 Dec 08:43

“The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Everyone is looking for the next thing,” Sutskever said. “Scaling the right thing matters more now than ever.”

The “wonder and discovery” phrasing is a dead giveaway for what’s going on.

hendrik@palaver.p3x.de on 20 Dec 08:50

Btw, that's a quote, not a statement made by the article/essay itself.

Alphane_Moon@lemmy.world on 20 Dec 08:52

Yup, Sutskever is the CEO of an LLM/AI company.

BMTea@lemmy.world on 20 Dec 10:40

Could you explain what you mean?

Alphane_Moon@lemmy.world on 20 Dec 12:39

I see this as PR speak for admitting that they can’t continue to achieve massive generational improvements in LLMs (let alone anything close to AGI) even with exponential increases in computing power. They are hitting a very costly brick wall.

rottingleaf@lemmy.world on 20 Dec 12:55

They never had much more than that computing power in the first place.

This whole AI hype is the same as the “big data” hype, just with different decorations. The premise: we now have enormous data (people are so connected they are visible to the world in detail unachievable before), enormous network connectivity, enormous computing resources, and enormous centralization, so let’s just combine these into something radically more powerful than what all those individuals with their boring science were trying to do without such awesome power. Only somehow it’s not more efficient, and they still can’t understand why.

It’s a reactionary breed of technologies. In 1824 an artisan couldn’t compete with a factory, but 2024 is not 1824. The radical breakthrough in the architecture of the digital economy is not a dumb repetition of the radical breakthrough in manufacturing back then. And it hasn’t happened yet.

And speaking of visions for the future: in my humble opinion, the original intent behind Java, the original intent behind the Web, the original intent behind Unix, and more recently peer-to-peer systems with smart contracts, distributed storage and services, cryptographic identities, and other such things point in the direction where that breakthrough is to be found. Not in mimicking some dumbed-down explanations from school history lessons, which can’t possibly be complete enough.

AnAmericanPotato@programming.dev on 20 Dec 15:08

Yep. AGI is still science fiction. Anyone telling you otherwise is probably just trying to fool investors. Ignore anyone who is less than three degrees of separation away from a marketing department.

The low-hanging fruit is quickly getting picked, so we’re bound to see a slowdown in advancement. And that’s a good thing. We don’t really need better language models at this point; we need better applications that use them.

The limiting factor is not so much hardware as our knowledge and competence in software architecture. As a historical example: ten short years ago, computers were nowhere near top-level at Go. Then DeepMind developed AlphaGo, a huge leap forward that could beat a top pro, and it ran on a supercomputer cluster. Thanks to the research breakthroughs around AlphaGo, within a few years we had similar AI that could run on any smartphone and beat any human player. It’s not because consumer hardware got that much faster; it’s because we learned how to make better software. Modern Go engines are a fraction of the size of AlphaGo and generate similar or better results with a tiny fraction of the operations. And it seems like we’re pretty close to the limit now; a supercomputer can’t play all that much better than my laptop.

Similarly, a few years ago something like GPT-3 needed a supercomputer. Now you can run a model with similar performance on a high-end phone or a low-end laptop. Again, it’s not because hardware has improved; the difference is the software. My current laptop (a 2021 model) predates ChatGPT (publicly launched in 2022), and it can easily run superior models.
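
To make that concrete: here’s a minimal sketch of running a quantized model on ordinary laptop hardware, assuming the llama-cpp-python bindings and a locally downloaded GGUF file (the path and filename below are placeholders, not a recommendation):

```python
# Run a small quantized language model on plain laptop hardware.
# Requires: pip install llama-cpp-python, plus a quantized GGUF model
# file downloaded locally (the path below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-7b-model.Q4_K_M.gguf",  # hypothetical file
    n_ctx=2048,    # context window
    n_threads=4,   # CPU threads; no GPU required
)

result = llm(
    "Explain, briefly, why small distilled models can rival large ones:",
    max_tokens=128,
    temperature=0.7,
)
print(result["choices"][0]["text"])
```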

But the returns inevitably diminish. There’s a limit somewhere. It’s hard to say exactly where, but entropy’s gonna getcha sooner or later. You simply cannot fit more than 16GB of information in a 16GB model; you can only inch closer to that theoretical limit, and specialize into smaller scopes. At some point the world will realize that trying to encode everything into a model is a dumb idea. We already have better tools for that.
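
The back-of-the-envelope version of that entropy argument, just arithmetic:

```python
# Hard ceiling on the information a 16 GB model can hold: its own size
# in bits. No training scheme can losslessly pack in more than that.
size_bytes = 16 * 2**30             # 16 GiB
capacity_bits = size_bytes * 8
print(f"{capacity_bits:.3e} bits")  # ~1.374e+11 bits

# For scale: at 8 bits per character of uncompressed text, that is at
# most ~17 billion characters' worth of perfectly packed content.
print(f"~{size_bytes:,} characters upper bound")
```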

Glasgow@lemmy.ml on 20 Dec 09:07

No

lurch@sh.itjust.works on 20 Dec 14:25

Feels more like regressing, tbh. I can’t pinpoint it, but whenever I’ve prompted an AI recently, the first answer was useless and I always had to follow up. I didn’t have to early this year.

atrielienz@lemmy.world on 21 Dec 06:17

Data is tainted by people who now know that whatever they generate is being funneled into these models. Additionally, models are having guardrails added to prevent problematic responses. Add to that the lack of clean datasets for training and the amount of gen-AI output now all over the web, and you get a fairly clear picture of why the answers are getting worse.

NineMileTower@lemmy.world on 20 Dec 16:10

The pace of AI progress is dynamic and depends on how you define “progress.” Here are a few factors to consider:

Areas of Rapid Progress:

  1. Generative AI: Models like ChatGPT, DALL-E, and others have advanced significantly, with improvements in realism, efficiency, and usability.
  2. Application-Specific AI: AI applications in healthcare, finance, and creative industries are becoming more refined and widely adopted.
  3. Hardware Advances: Newer GPUs (NVIDIA’s, notably) and specialized AI accelerators are speeding up AI computation, enabling larger and more complex models.

Signs of Slowing Down:

  1. Diminishing Returns on Scaling: Larger models often bring smaller performance improvements relative to their size and cost (see the sketch after this list).
  2. Compute and Cost Barriers: Training state-of-the-art models requires immense computational resources, which can limit innovation to a few big players.
  3. Regulation and Ethical Concerns: Increased scrutiny and regulation could slow the release and development of new AI systems.
  4. Bottlenecks in Data: High-quality data for training is becoming harder to obtain, and there are growing concerns about data privacy and ethics.
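
As a rough illustration of point 1, a Chinchilla-style power law shows how each order of magnitude of scale buys less. The constants below are purely illustrative, not fitted to any real model family:

```python
# Diminishing returns from scaling alone, illustrated with a
# Chinchilla-style power law: loss(N) = E + A / N**alpha.
# E, A, alpha here are made-up illustrative constants.
E, A, alpha = 1.7, 400.0, 0.34

def loss(n_params):
    return E + A / n_params**alpha

prev = None
for n in [1e9, 1e10, 1e11, 1e12]:   # parameter counts
    cur = loss(n)
    gain = f" (gain {prev - cur:.3f})" if prev is not None else ""
    print(f"{n:.0e} params -> loss {cur:.3f}{gain}")
    prev = cur
# Each 10x in parameters buys a smaller absolute improvement.
```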

Perspective:

While some aspects might appear to be slowing down, others are evolving rapidly. Shifts in focus from building “bigger” models to creating “smarter” and more efficient ones may redefine progress in a more sustainable way.

What’s your take? Are you seeing specific areas where you feel progress is slowing?

jrs100000@lemmy.world on 21 Dec 06:02

Your analysis captures the multifaceted nature of AI progress well, and I largely agree that the perception of speed depends on how progress is defined. Here’s my take:

Areas Where Progress Feels Rapid

  • Generative AI: Beyond ChatGPT and DALL-E, there’s notable progress in real-time applications like conversational agents, video synthesis, and multimodal systems (e.g., combining text, image, and speech capabilities). The focus on user-friendliness and API integrations is also accelerating adoption.
  • Hardware: The emergence of neuromorphic computing and photonic processors could represent the next leap, addressing some of the bottlenecks in scaling.

Where Progress Might Be Slowing

  • Model Scaling: You’re absolutely right about diminishing returns. While scaling models has led to significant breakthroughs, the marginal utility of increasing size has dropped, prompting a pivot toward efficiency (e.g., fine-tuning smaller, task-specific models).
  • Economic and Access Barriers: With AI development increasingly dominated by large companies, the democratization of innovation is at risk. This concentration could slow down grassroots advancements, which have historically driven many breakthroughs.

Shifts in Focus

Progress is becoming more qualitative than quantitative, with emphasis on:

  1. Efficiency: Sparse models, transfer learning, and techniques like distillation are becoming more prominent, offering alternatives to brute-force scaling (a minimal distillation sketch follows this list).
  2. Ethics and Safety: While often framed as a “slowing” factor, these considerations are crucial for long-term progress and societal acceptance.
  3. Applications Beyond the Obvious: AI is entering domains like scientific discovery, climate modeling, and personalized medicine, which may have slower, more deliberate progress but could yield profound impacts.
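
To ground the distillation point: here’s a minimal PyTorch sketch of the classic soft-target loss (an illustration of the general technique from Hinton et al., with arbitrarily chosen hyperparameters), where a small student learns to match a large teacher’s temperature-softened output distribution alongside the usual hard labels:

```python
# Knowledge distillation: train a small "student" to mimic a large
# "teacher" model's softened output distribution plus the true labels.
# Requires: pip install torch
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft part: KL divergence between temperature-softened distributions;
    # scaling by T^2 keeps gradients comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    # Hard part: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: a batch of 4 examples over 10 classes.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()  # gradients flow only into the student
```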

Your Question: Signs of Progress Slowing?

I see areas like:

  • Regulation and Trust: Societal pushback and increased regulatory scrutiny (e.g., around deepfakes or data privacy) can decelerate deployment but also guide ethical innovation.
  • Data Bottlenecks: You nailed this point. The challenge isn’t just quantity but ensuring high-quality, unbiased, and ethically sourced data.

Final Thought

AI progress is less about speed and more about direction. Slower, deliberate progress in areas like ethics, sustainability, and accessibility might not look “dynamic” but is essential for ensuring AI benefits society broadly. The true “progress” may lie in creating smarter, safer, and more inclusive systems rather than faster, bigger, and flashier ones.