Ummm no? If moneyed interests want it then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?
If capital wants it capital gets it. :(
unless we destroy capitalism?
The only problem with destroying capitalism is deciding who gets all the nukes.
Use Linux and don't have any of those issues.
Get off the capitalist-owned platforms.
In the US, sure, but there have been class revolts in other nations. I'm not saying they led to good outcomes, but King Louis XVI was rich, and being rich did not save him. There was a capitalist class in China during the Cultural Revolution; they didn't make it through. If it means we won't go extinct, why can't we have a revolution to prevent extinction?
We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.
Irrelevant at best, harmful at worst.
How do you know we're not remotely close to AGI? Do you have any expertise on the issue? And expertise is not "I can download Python libraries and use them"; it is "I can explain the mathematics behind what is going on, and understand the technical and theoretical challenges".
Part of this is a debate on what the definition of intelligence and/or consciousness is, which I am not qualified to discuss. (I say "discuss" instead of "answer" because there is no agreed-upon answer to either of those.)
That said, one of the main purposes of AGI would be to learn novel subject matter and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that, on a fundamental level. They require humans to frame their training data by defining what the success criteria are, or they spit out the statistically likely human-like response based on all of the human-generated content they've consumed.
In short, they cannot understand a concept that humans haven't yet understood, and can only echo solutions that humans have already tried.
I don't see why AGI must be conscious, and the fact that you even bring it up makes me think you haven't thought too hard about any of this.
When you say "novel answers", what is it you mean? The questions on the IMO have never been asked of any human before the Math Olympiad, and almost all humans cannot answer those questions.
Why does answering those questions not count as novel? What is a question whose answer you would count as novel, and which you yourself could answer? Presuming that you count yourself as intelligent.
AI does not have genetics and therefore no instincts that were shaped by billions of years of evolution,
so when presented with a challenge that doesn't appear in its training data, such as whether to love your neighbor or not, it might not be able to answer, because that exact scenario isn't in its training data.
Humans can answer it instinctively, because we have billions of years of experience behind us, backing us up and giving us a solid basis for long-term positive decision-making.
Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.
I loathe Python irrationally (and I guess I'm a masochist who likes to reinvent the wheel, programming-wise, lol), so I've written my own neural nets from scratch a few times.
The most common models are trained by gradient descent, but this only works when you have a specific response in mind for certain inputs. You use the difference between the desired outcome and the actual outcome to calculate a change in weights that would minimize that error.
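To make that concrete, here's a toy single-weight example of gradient descent (my own sketch, not anything from the comment; the data and learning rate are made up):

import numpy as np

# toy data: we want the "neuron" to learn y = 2 * x
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0      # a single weight, arbitrarily initialized
lr = 0.01    # learning rate

for step in range(1000):
    pred = w * x                   # actual outcome
    error = pred - y               # difference from the desired outcome
    grad = 2 * np.mean(error * x)  # gradient of the mean squared error w.r.t. w
    w -= lr * grad                 # nudge the weight to shrink the error

print(w)  # ends up close to 2.0

The "learning" here is nothing more than repeatedly nudging numbers toward answers you already specified.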
This has two major issues that get in the way of AGI: input size limits, and determinism.
The weight matrices are set for a certain number of inputs. Unfortunately you can't just add a new unit of input and assume the weights will be nearly the same; instead you have to retrain the entire network. (Transfer learning is the topic to look up if you want to learn more about this problem.)
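A minimal illustration of why the input size is baked in (a toy I wrote with arbitrary sizes; nothing here comes from the comment itself):

import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden = 4, 8
W = rng.normal(size=(n_inputs, n_hidden))  # weights trained for exactly 4 inputs

x = rng.normal(size=4)
h = x @ W                                  # fine: (4,) times (4, 8) gives (8,)

x_bigger = rng.normal(size=5)              # one extra input feature
# x_bigger @ W raises a shape-mismatch error; and you can't just bolt on a
# fifth row of random weights and expect sensible outputs, so in practice
# you retrain (or carefully fine-tune) the whole network.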
This input constraint is a blocker for AGI because it means a network trained like this cannot have an input larger than a certain size. That's problematic, since the illusion of memory that LLMs like ChatGPT have comes from the fact that they run the entire conversation through the net. It's also problematic from a size and training-time perspective, as increasing the input size rapidly increases the cost of basically everything else.
Point is, current models are only able to simulate memory by literally holding onto all the information and processing all of it for each new word, which means there is a limit to their memory unless you retrain the entire net to know the answers you want. (And it's slow af.) Doesn't sound like a mind to me…
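This is roughly what that "memory" looks like from the outside (a sketch assuming a stateless model; generate() below is a placeholder for whatever next-token loop the real system runs, not a real API):

# The model itself is stateless; the app replays the whole transcript every turn.
def generate(prompt: str) -> str:
    return "..."  # placeholder for the actual model call

conversation = []

def chat(user_message: str) -> str:
    conversation.append("User: " + user_message)
    full_prompt = "\n".join(conversation)  # everything said so far, every single time
    reply = generate(full_prompt)          # grows until it hits the context limit
    conversation.append("Assistant: " + reply)
    return reply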
Now, determinism is the real problem for AGI from a cognitive standpoint. The neural nets you've probably used are not thinking… at all. They literally are just a complicated predictive algorithm, like linear regression. I'm dead serious. It's basically regression, just in a very high-dimensional vector space.
ChatGPT does not think about its answer. It doesn't have any sort of object identification or thought delineation because it doesn't have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it's off, you do some math to figure out what weight modifications would have led it to a better answer.
All these models do is what they were trained to do. Now, they were trained to predict human responses, so yeah, it sounds pretty human. They were trained to reproduce answers from Stack Overflow and Reddit etc., so they can answer those questions relatively well. And hey, it is kind of cool that they can even answer some questions they weren't trained on, because those are similar enough to the questions they were trained on… but it's not thinking. It isn't doing anything. The program is just multiplying numbers that were previously set by an input to find the most likely next word.
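Stripped of everything else, the last step is just "pick a likely word from a probability distribution". A toy version (the vocabulary and scores here are invented for illustration):

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

vocab = ["the", "cat", "sat", "mat"]
logits = np.array([0.1, 2.3, 0.4, 1.5])   # made-up scores a trained net might output

probs = softmax(logits)                    # turn scores into a probability distribution
next_word = vocab[int(np.argmax(probs))]   # greedy decoding: take the most likely word
print(next_word)                           # "cat" -- statistics, not reasoning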
This is why LLMs can't do math: they don't actually see the numbers, and they don't know what numbers are. They don't know anything at all, because they're incapable of thought. Instead, there are simply patterns in which certain numbers show up, and the model gets trained on some of them, but you can get it to make incredibly simple math mistakes by phrasing the math slightly differently, or just by surrounding it with different words, because the model was never trained for that scenario.
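One way to see the "doesn't see the numbers" point (a deliberately naive toy tokenizer; real tokenizers split text differently, but the underlying issue is the same):

def toy_tokenize(text: str) -> list[str]:
    # naive whitespace split, standing in for BPE-style subword tokenization
    return text.split()

print(toy_tokenize("What is 1234 + 5678?"))
# ['What', 'is', '1234', '+', '5678?']
# The model gets opaque chunks of text with learned statistics attached,
# not numeric values it can actually add.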
Models can only "know" as much as what was fed into them, and hey, sometimes those patterns extend, but a lot of the time they don't. And you can't just tell it "you were wrong", because the model isn't plastic (capable of changing from inputs alone). You have to train it with the correct response in mind to get it to "learn", which again takes time and really isn't learning or intelligence at all.
Now, there are some more exotic neural network architectures that could surpass these limitations.
Currently I'm experimenting with spiking neural nets (SNNs), which are much more capable of transfer learning and more closely model biological neurons, along with other cool features like handling temporal changes in input well.
However, there are significant obstacles with these networks, and not as much research, because they only run well on specialized hardware (since they are meant to mimic biological neurons, which all fire in parallel), and you kind of have to train them slowly.
You can do some tricks to use gradient descent, but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building neuromorphic hardware for them).
SNNs with time-based learning rules (typically some form of STDP, which mimics Hebbian learning as in biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing from inputs alone).
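For anyone curious what an STDP update even looks like, here's a bare-bones pair-based version (my own toy with made-up constants, not the commenter's code; real SNN training involves a lot more than this):

import numpy as np

a_plus, a_minus = 0.01, 0.012  # learning rates for strengthening / weakening
tau = 20.0                     # width of the STDP time window, in ms

def stdp_dw(t_pre: float, t_post: float) -> float:
    dt = t_post - t_pre
    if dt > 0:   # presynaptic spike came first ("pre helped cause post"): strengthen
        return a_plus * np.exp(-dt / tau)
    else:        # postsynaptic spike came first: weaken
        return -a_minus * np.exp(dt / tau)

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)  # causal pairing, weight goes up
w += stdp_dw(t_pre=30.0, t_post=22.0)  # anti-causal pairing, weight goes down

The weight changes as a side effect of the spikes themselves, which is exactly the "changing from inputs alone" property the dense nets above don't have.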
This is a fantastic response. I'm saving this so I can use it to show people that LLMs are not thinking machines.
I hold a PhD in probabilistic machine learning and advise businesses on how to use AI effectively for a living, so yes.
IMHO, there is simply nothing indicating that it's close. Sure, LLMs can do some incredibly clever-sounding word extrapolation, but the current "reasoning models" still don't actually reason. They are just LLMs with some extra steps.
There is lots of information out there on the topic, so I'm not going to write a long justification here. Gary Marcus has some good points if you want to learn more about what the skeptics say.
So, how would you define AGI, and what sorts of tasks require reasoning? I would have thought earning a gold medal at the IMO would have been a reasoning task, but I'm happy to learn why I'm wrong.
AI will not threaten humans out of sadism or boredom, but because it takes jobs and leaves people unemployed.
When there is lower demand for human labor, then, by the law of supply and demand, the price of human labor (i.e., wages) goes down.
The real crisis is one of sinking wages, missing social safety nets, and a lack of future prospects for workers. That's what should actually be discussed.
But scary robots will take over the world! That's what all the movies are about! If it's in a movie, it has to be real.
Cataclysms notwithstanding, human-level AI is inevitable. That doesn't have to mean that it'll be next week, or even next century, but it will happen.
The only way it won't is if humans are wiped out. (And even then there might be extra-terrestrials who get there where we didn't. Human-level doesn't have to mean invented by humans.)
Honestly I welcome our AI overlords. They can't possibly fuck things up harder than we have.
Can't they?
It's just a cash grab to take people's jobs and give them to a chatbot that's been fed Wikipedia's data on crack.
We can change course if we can change course on capitalism.