Keep the Future Human: How Unchecked Development of Smarter-Than-Human, Autonomous, General-Purpose AI Systems Will Almost Inevitably Lead to Human Replacement. But it Doesn't Have to. (keepthefuturehuman.ai)
from Pro@programming.dev to technology@lemmy.world on 25 May 22:34
https://programming.dev/post/31024075

Dramatic advances in artificial intelligence over the past decade (for narrow-purpose AI) and the last several years (for general-purpose AI) have transformed AI from a niche academic field to the core business strategy of many of the world’s largest companies, with hundreds of billions of dollars in annual investment in the techniques and technologies for advancing AI’s capabilities.

We now come to a critical juncture. As the capabilities of new AI systems begin to match and exceed those of humans across many cognitive domains, humanity must decide: how far do we go, and in what direction?

AI, like every technology, started with the goal of improving things for its creator. But our current trajectory, and implicit choice, is an unchecked race toward ever-more powerful systems, driven by economic incentives of a few huge technology companies seeking to automate large swathes of current economic activity and human labor. If this race continues much longer, there is an inevitable winner: AI itself – a faster, smarter, cheaper alternative to people in our economy, our thinking, our decisions, and eventually in control of our civilization.

But we can make another choice: via our governments, we can take control of the AI development process to impose clear limits, lines we won’t cross, and things we simply won’t do – as we have for nuclear technologies, weapons of mass destruction, space weapons, environmentally destructive processes, the bioengineering of humans, and eugenics. Most importantly, we can ensure that AI remains a tool to empower humans, rather than a new species that replaces and eventually supplants us.

This essay argues that we should keep the future human by closing the “gates” to smarter-than-human, autonomous, general-purpose AI – sometimes called “AGI” – and especially to the highly-superhuman version sometimes called “superintelligence.” Instead, we should focus on powerful, trustworthy AI tools that can empower individuals and transformatively improve human societies’ abilities to do what they do best. The structure of this argument follows in brief.

#technology

solrize@lemmy.ml on 25 May 22:39 next collapse

ifanyonebuildsit.com argues the opposite (“if anyone builds it, everyone dies”). It’s not out yet though.

cecilkorik@lemmy.ca on 26 May 00:40 collapse

That’s arguing the same thing.

lectricleopard@lemmy.world on 26 May 02:19 next collapse

Where are these AGI systems? All I hear about is LLMs fooling execs, which is basically them just falling for a fast-talking computer.

cecilkorik@lemmy.ca on 26 May 03:48 collapse

Fooling people is evidently all you need to do to become President of the United States and Commander In Chief of the world’s largest military with personal control over a massive stockpile of nuclear weapons. Fast-talking computers could be dangerous when they’re infinitely faster, and probably smarter and slightly less neurotic than the current president. “Hey, come to think of it, has anyone ever even seen the 2028 president-elect on anything other than a screen?”

lectricleopard@lemmy.world on 26 May 04:02 collapse

You’re missing my point. An LLM can’t be “smarter” or “smart” at all. It isn’t sentient or conscious. It doesn’t even have a stable internal model of the world. What people call “hallucinations” are simply the random words selected at the edges of its convincing predictive power. Personification of LLMs is all marketing. They’re really just well presented statistical models.

cecilkorik@lemmy.ca on 26 May 04:23 collapse

I think you’re missing mine. I know it’s not clever. I think everything it creates is either slop or plagiarism, and almost always both.

My point is: an arbitrary random number generator without any stable internal model of the world would still be a bad thing if it can, without any conscious intention, trick or confuse people into thinking it’s so awesome and clever that they choose it to be emperor of Earth, leader of the economy, decider of reality, and build it a great throne upon which they can worship it and an altar on which to burn oil as a sacrifice to the environment. That’s what LLMs are doing. It doesn’t matter whether the LLMs intend to; it doesn’t matter whether they have intentions at all. What matters is that they’re so “well presented” that people fall for them. It’s the effectiveness they have at making people fall for them that’s the problem. We can’t dismiss those people as weak, naive, or stupid, because their actions matter, their votes matter, their financial choices matter, they’re part of the civilization we live in, and frankly, they seem to be the majority.

lectricleopard@lemmy.world on 26 May 05:19 collapse

I think where we may differ in opinion is where the power lies in this situation. To me, it’s all about having the money for the compute, which means it’s propaganda for the wealthy. That’s nothing new. I do not expect a significant increase in societal support for these types of messages. At least not on a historical scale.

You seem to imply the LLM can at the very least have an emergent agenda, or seeming agenda, of its own. I do not believe that to be the case. LLMs that generate output that doesn’t align with the monied interests will be considered misaligned and discarded.

WhyJiffie@sh.itjust.works on 26 May 03:01 next collapse

Not just AI systems, but implants too. We need to do everything to make sure they don’t become a necessity, on any level, in any circumstance. But I fear that won’t be enough.

NigelFrobisher@aussie.zone on 26 May 03:39 next collapse

If they’re smarter than us they won’t want to work for us.

MyFriendGodzilla@lemmy.world on 26 May 06:02 next collapse

I don’t think we should worry about takeovers… LLMs are dumber than rocks and will never actually “think”, and AGI is still make-believe. The only thing real about AI is the hype. I am waiting for the AI crash so I can watch these megacorps autocannibalize and die like the .com copies they are.

echodot@feddit.uk on 26 May 06:27 collapse

Can it replace politicians? I feel like that would actually be an improvement. Hell, it’d probably be an improvement if the current systems replaced politicians.

To be honest, though, I’ve never seen any evidence that AGI is inevitable; it’s perpetually 6 months away, except in 6 months it’ll still be 6 months away.