Human-level AI is not inevitable. We have the power to change course (www.theguardian.com)
from Davriellelouna@lemmy.world to technology@lemmy.world on 21 Jul 18:43
https://lemmy.world/post/33284882

#technology


Asafum@feddit.nl on 21 Jul 18:50 next collapse

Ummm no? If moneyed interests want it then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?

If capital wants it capital gets it. :(

drapeaunoir@lemmy.dbzer0.com on 21 Jul 19:19 next collapse

😳 unless we destroy capitalism? 👉🏾👈🏾

masterofn001@lemmy.ca on 21 Jul 21:10 collapse

The only problem with destroying capitalism is deciding who gets all the nukes.

BroBot9000@lemmy.world on 21 Jul 20:44 next collapse

Use Linux and don’t have any of those issues.

Get off the capitalist owned platforms.

qt0x40490FDB@lemmy.ml on 21 Jul 22:49 collapse

In the US, sure, but there have been class revolts in other nations. I’m not saying they led to good outcomes, but King Louis XVI was rich. And being rich did not save him. There was a capitalist class in China during the Cultural Revolution. They didn’t make it through. If it means we won’t go extinct, why can’t we have a revolution to prevent extinction?

terrific@lemmy.ml on 21 Jul 20:37 next collapse

We’re not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

Irrelevant at best, harmful at worst 🤷

qt0x40490FDB@lemmy.ml on 21 Jul 22:59 collapse

How do you know we’re not remotely close to AGI? Do you have any expertise on the issue? And expertise is not “I can download Python libraries and use them”; it is “I can explain the mathematics behind what is going on, and understand the technical and theoretical challenges”.

Eranziel@lemmy.world on 21 Jul 23:19 next collapse

Part of this is a debate on what the definition of intelligence and/or consciousness is, which I am not qualified to discuss. (I say “discuss” instead of “answer” because there is not an agreed upon answer to either of those.)

That said, one of the main purposes of AGI would be the ability to learn novel subject matter and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that, on a fundamental level. They require humans to frame their training data by defining what the success criteria are, or they spit out the statistically likely human-like response based on all of the human-generated content they’ve consumed.

In short, they cannot understand a concept that humans haven’t yet understood, and can only echo solutions that humans have already tried.

qt0x40490FDB@lemmy.ml on 21 Jul 23:25 collapse

I don’t see why AGI must be conscious, and the fact that you even bring it up makes me think you haven’t thought too hard about any of this.

When you say “novel answers”, what is it you mean? The questions on the IMO have never been asked of any human before the Math Olympiad, and almost all humans cannot answer those questions.

Why does answering those questions not count as novel? What is a question whose answer you would count as novel, and which you yourself could answer? Presuming that you count yourself as intelligent.

gandalf_der_12te@discuss.tchncs.de on 21 Jul 23:38 collapse

What is a question whose answer you would count as novel, and which you yourself could answer?

AI does not have genetics and therefore no instincts that were shaped by billions of years of evolution,

so when presented with a challenge such as whether or not to love your neighbor, it might not be able to answer, because that exact scenario doesn’t appear in its training data.

Humans can answer it instinctively because we have billions of years of experience behind us, backing us up and providing us with a solid capacity for positive long-term decision-making.

AnarchoEngineer@lemmy.dbzer0.com on 22 Jul 02:06 next collapse

Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.

I loathe Python irrationally (and I guess I’m a masochist who likes to reinvent the wheel programming-wise, lol), so I’ve written my own neural nets from scratch a few times.

Most common models are trained by gradient descent, but this only works when you have a specific response in mind for certain inputs. You use the difference between the desired outcome and actual outcome to calculate a change in weights that would minimize that error.
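Something like this toy from-scratch sketch, for the curious (a single linear layer with made-up data and learning rate; real networks stack many such layers, but the loop is the same idea):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # 100 samples, exactly 3 input features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w                          # the "specific response in mind" for each input

w = np.zeros(3)                         # weights start arbitrary
lr = 0.1                                # learning rate (illustrative)
for step in range(200):
    pred = X @ w                        # actual outcome
    error = pred - y                    # difference from the desired outcome
    grad = X.T @ error / len(X)         # gradient of the squared-error loss
    w -= lr * grad                      # change the weights to shrink that error

print(w)                                # ends up close to true_w
```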

This has two major preventative issues for AGI: input size limits, and determinism.

The weight matrices are set for a certain number of inputs. Unfortunately, you can’t just add a new unit of input and assume the weights will be nearly the same. Instead you have to retrain the entire network. (The research area that tackles this is called transfer learning, if you want to learn more.)

This input constraint works against AGI because it means a network trained like this cannot accept an input larger than a certain size. That’s a problem, since the illusion of memory that LLMs like ChatGPT have comes from the fact that they run the entire conversation through the net. It’s also a problem from a size and training-time perspective, since increasing the input size rapidly increases basically everything else.
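A tiny illustration of the fixed-shape problem (shapes and values made up):

```python
import numpy as np

W = np.random.default_rng(1).normal(size=(4, 3))   # a layer trained for exactly 3 inputs
x_old = np.ones(3)
print(W @ x_old)                                   # works fine

x_new = np.ones(4)                                 # add one new unit of input...
try:
    W @ x_new                                      # ...and the trained weights no longer fit
except ValueError as err:
    print("shape mismatch:", err)                  # only real fix: resize W and retrain
```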

Point is, current models are only able to simulate memory by literally holding onto all the information and processing all of it for each new word, which means there is a limit to their memory unless you retrain the entire net to know the answers you want. (And it’s slow af.) Doesn’t sound like a mind to me…
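Roughly what that “memory” looks like in practice, as a hypothetical sketch (the tokenizer, the limit, and the model call are stand-ins here, not any real API):

```python
CONTEXT_LIMIT = 4096                    # hypothetical token limit

conversation = []                       # the "memory" is just this transcript

def count_tokens(text):
    return len(text.split())            # crude stand-in for a real tokenizer

def chat(user_message, model):
    conversation.append(("user", user_message))
    # Every turn, the *entire* conversation is re-fed through the net.
    # Once it exceeds the limit, the oldest turns fall off and are "forgotten".
    while sum(count_tokens(text) for _, text in conversation) > CONTEXT_LIMIT:
        conversation.pop(0)
    prompt = "\n".join(f"{role}: {text}" for role, text in conversation)
    reply = model(prompt)               # placeholder for the actual forward pass
    conversation.append(("assistant", reply))
    return reply
```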

Now determinism is the real problem for AGI from a cognitive standpoint. The neural nets you’ve probably used are not thinking… at all. They literally are just a complicated predictive algorithm, like linear regression. I’m dead serious. It’s basically regression, just in a very high-dimensional vector space.

ChatGPT does not think about its answer. It doesn’t have any sort of object identification or thought delineation because it doesn’t have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it’s off, you do some math to figure out what weight modifications would have led it to a better answer.

All these models do is what they were trained to do. Now they were trained to be able to predict human responses, so yeah, it sounds pretty human. They were trained to reproduce answers on Stack Overflow and Reddit etc., so they can answer those questions relatively well. And hey, it is kind of cool that they can even answer some questions they weren’t trained on, because those are similar enough to the questions they were trained on… but it’s not thinking. It isn’t doing anything. The program is just multiplying numbers that were previously set by an input to find the most likely next word.
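A deliberately tiny sketch of that loop: guess the next word, measure how wrong the guess was, nudge the weights (corpus and learning rate are made up; a real LLM does the same thing with billions of weights and a transformer instead of one small matrix):

```python
import numpy as np

corpus = "the cat sat on the mat and the cat ate the food".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

W = np.zeros((V, V))                    # "weights": logits for next word given current word
lr = 0.5
for _ in range(200):
    for cur, nxt in zip(corpus, corpus[1:]):
        logits = W[idx[cur]]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()            # model's predicted distribution over next words
        target = np.zeros(V)
        target[idx[nxt]] = 1.0          # the word that actually came next
        W[idx[cur]] -= lr * (probs - target)   # cross-entropy gradient: nudge toward the right answer

# "Generation" is just repeatedly picking a likely next word.
word = "the"
for _ in range(5):
    word = vocab[int(np.argmax(W[idx[word]]))]
    print(word, end=" ")
```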

This is why LLMs can’t do math. They don’t actually see the numbers; they don’t know what numbers are. They don’t know anything at all because they’re incapable of thought. Instead, there are simply patterns in which certain numbers show up, and the model gets trained on some of them, but you can get it to make incredibly simple math mistakes by phrasing the math slightly differently or just by surrounding it with different words, because the model was never trained for that scenario.

Models can only “know” as much as what was fed into them, and hey, sometimes those patterns extend, but a lot of the time they don’t. And you can’t just say “you were wrong”, because the model isn’t transient (capable of changing from inputs alone). You have to train it with the correct response in mind to get it to “learn”, which again takes time and really isn’t learning or intelligence at all.

Now there are some more exotic neural network architectures that could surpass these limitations.

Currently I’m experimenting with Spiking Neural Nets which are much more capable of transfer learning and more closely model biological neurons along with other cool features like being good with temporal changes in input.

However, there are significant obstacles with these networks, and not as much research, because they only run well on specialized hardware (since they are meant to mimic biological neurons, which all run simultaneously) and you kind of have to train them slowly.

You can do some tricks to use gradient descent but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building the neuromorphic hardware for them).

SNNs with time-based learning rules (typically some form of STDP, which mimics Hebbian learning as in biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing from inputs alone).
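For context, a pair-based STDP update looks roughly like this sketch (constants are illustrative, not from any particular model): if the presynaptic neuron fires just before the postsynaptic one, the synapse strengthens; if it fires just after, it weakens.

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012          # potentiation / depression amplitudes (illustrative)
TAU = 20.0                             # time constant in milliseconds (illustrative)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pairing."""
    dt = t_post - t_pre
    if dt > 0:                         # pre fired before post: strengthen ("fire together, wire together")
        return A_PLUS * math.exp(-dt / TAU)
    else:                              # pre fired after post: weaken
        return -A_MINUS * math.exp(dt / TAU)

w = 0.5                                # synaptic weight
w += stdp_dw(t_pre=10.0, t_post=15.0)  # causal pairing, weight goes up
w += stdp_dw(t_pre=30.0, t_post=22.0)  # anti-causal pairing, weight goes down
print(w)
```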

Badabinski@kbin.earth on 22 Jul 04:17 collapse

This is a fantastic response. I'm saving this so I can use it to show people that LLMs are not thinking machines.

terrific@lemmy.ml on 22 Jul 04:13 collapse

Do you have any expertise on the issue?

I hold a PhD in probabilistic machine learning and advise businesses on how to use AI effectively for a living, so yes.

IMHO, there is simply nothing indicating that it’s close. Sure, LLMs can do some incredibly clever-sounding word extrapolation, but the current “reasoning models” still don’t actually reason. They are just LLMs with some extra steps.

There is lots of information out there on the topic so I’m not going to write a long justification here. Gary Marcus has some good points if you want to learn more about what the skeptics say.

qt0x40490FDB@lemmy.ml on 22 Jul 04:19 collapse

So, how would you define AGI, and what sorts of tasks require reasoning? I would have thought earning the gold medal on the IMO would have been a reasoning task, but I’m happy to learn why I’m wrong.

gandalf_der_12te@discuss.tchncs.de on 21 Jul 23:28 next collapse

AI will not threaten humans due to sadism or boredom, but because it takes jobs and makes people jobless.

When there is lower demand for human labor, then according to the law of supply and demand, the price of human labor (i.e., wages) goes down.

The real crisis is one of sinking wages, lack of social safety nets, and lack of future prospects for workers. That’s what should actually be discussed.

Zorque@lemmy.world on 22 Jul 00:30 collapse

But scary robots will take over the world! That’s what all the movies are about! If it’s in a movie, it has to be real.

palordrolap@fedia.io on 21 Jul 23:43 next collapse

Cataclysms notwithstanding, human-level AI is inevitable. That doesn't have to mean that it'll be next week, or even next century, but it will happen.

The only way it won't is if humans are wiped out. (And even then there might be extra-terrestrials who get there where we didn't. Human-level doesn't have to mean invented by humans.)

Etterra@discuss.online on 22 Jul 00:46 next collapse

Honestly I welcome our AI overlords. They can’t possibly fuck things up harder than we have.

AngryRobot@lemmy.world on 22 Jul 02:35 collapse

Can’t they?

Deathgl0be@lemmy.world on 22 Jul 02:27 next collapse

It’s just a cash grab to take people’s jobs and give them to a chatbot that’s been fed Wikipedia’s data on crack.

SpicyLizards@reddthat.com on 22 Jul 04:19 collapse

We can change course if we can change course on capitalism