Artificial Superintelligence Could Arrive by 2027, Scientist Predicts (futurism.com)
from return2ozma@lemmy.world to technology@lemmy.world on 16 Jun 2024 02:42
https://lemmy.world/post/16578596

#technology

threaded - newest

autotldr@lemmings.world on 16 Jun 2024 02:45 next collapse

This is the best summary I could come up with:


We may not have reached artificial general intelligence (AGI) yet, but as one of the leading experts in the theoretical field claims, it may get here sooner rather than later.

During his closing remarks at this year’s Beneficial AGI Summit in Panama, computer scientist and haberdashery enthusiast Ben Goertzel said that although people most likely won’t build human-level or superhuman AI until 2029 or 2030, there’s a chance it could happen as soon as 2027.

After that, the SingularityNET founder said, AGI could then evolve rapidly into artificial superintelligence (ASI), which he defines as an AI with all the combined knowledge of human civilization.

Last fall, for instance, Google DeepMind co-founder Shane Legg reiterated his more than decade-old prediction that there’s a 50/50 chance that humans invent AGI by the year 2028.

Best known as the creator of Sophia the humanoid robot, Goertzel has long theorized about the date of the so-called “singularity,” or the point at which AI reaches human-level intelligence and subsequently surpasses it.

Then there’s the assumption that the evolution of the technology would continue down a linear pathway as if in a vacuum from the rest of human society and the harms we bring to the planet.


The original article contains 524 words, the summary contains 201 words. Saved 62%. I’m a bot and I’m open source!

Sanctus@lemmy.world on 16 Jun 2024 04:15 next collapse

And what are the power consumption rates on these super AIs? Can we afford that with our power grids? Our current situation with existing AI is already getting murky.

SturgiesYrFase@lemmy.ml on 16 Jun 2024 08:00 next collapse

Don’t worry, OpenAI has made a deal with a fusion-power company that’s also heavily connected to Sam Altman and hasn’t actually done anything exceptional yet. So when they finally crack fusion, AI can just be powered by that!

Sanctus@lemmy.world on 16 Jun 2024 20:42 collapse

Sounds like me trying to plan my Dyson Sphere at the beginning of a Stellaris game.

SturgiesYrFase@lemmy.ml on 16 Jun 2024 22:05 collapse

Been trying out the demo for a game called Airborne Empire. You should give it a go, you’ll have more moments like this.

Heywaitaminute@lemmy.world on 16 Jun 2024 15:22 collapse

Humanity: “Super AI, what’s the best way to slow climate change?” Super AI: “Turn me off.”

pdxfed@lemmy.world on 16 Jun 2024 05:09 next collapse

And by super intelligence we mean connected to a lot of things and able to wreak significant havoc, but absolutely fucking worthless for complex thought.

#Skynub

brsrklf@jlai.lu on 16 Jun 2024 06:39 next collapse

Beneficial AGI Summit

Oh good, they’re the ones who want a nice AI overlord.

uriel238@lemmy.blahaj.zone on 16 Jun 2024 23:46 collapse

To be fair, current human overlords are presenting a strong case that human beings cannot govern themselves at large scale (e.g. more than 500 people in a society), so a nice, public-serving AI overlord is a pretty good pipe dream.

I don’t know if it’s feasible at all, but man we’d be lucky if we made one.

tal@lemmy.today on 16 Jun 2024 07:28 next collapse

I mean, theoretically, yes.

I mean, it could be that some guy in his basement has been working on it in total secrecy and it shows up tomorrow.

But my guess is that the likely timeline is further out than either of those dates.

I seriously doubt that what we’re going to see is a single “Eureka” moment that gives us both AGI and manages to greatly surpass humans.

I would expect to see a more incremental process, where publicly visible systems get closer and closer to that point. And what OpenAI and friends are doing isn’t close. It’s cool, and useful for a lot of things, but it isn’t a generalized system for solving problems.

sugar_in_your_tea@sh.itjust.works on 17 Jun 2024 04:56 collapse

Exactly. If you look at timelines of significant human achievement, it looks like innovation comes in waves. But if you zoom in a bit, it’s really a bunch of ripples adding up to pretty steady innovation.

For example, EVs exploded with Tesla, but they’d been around for decades; they just hadn’t caught on. The innovation to get there was steady, but adoption was quick once a viable product was available and marketed well.

The same is true for AI. I learned about generative AI in college over a decade ago, and the source material was even older (IIRC the old Lisp machines were supposed to be used for AI). It exploded because it got just good enough to be viable, and it was marketed well. The actual innovation was quite gradual.

technocrit@lemmy.dbzer0.com on 16 Jun 2024 14:53 next collapse

“scientist”

0x0@programming.dev on 16 Jun 2024 18:46 collapse

Right around the time of the linux desktop then.

KISSmyOSFeddit@lemmy.world on 16 Jun 2024 20:47 collapse

The year of the Linux desktop is already here, just not the way the geeks hoped. Most people do their everyday computing on phones now, and most phones run on a Linux kernel.
Windows 11 comes with WSL, and the entire OS is mostly a front-end for Microsoft’s cloud services now, which run on Linux.

0x0@programming.dev on 17 Jun 2024 09:22 collapse

…none of those are desktops?