autotldr@lemmings.world
on 05 Jun 03:50
🤖 I'm a bot that provides automatic summaries for articles:
In an interview with The New York Times, former OpenAI governance researcher Daniel Kokotajlo accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are so enthralled with its possibilities.
Kokotajlo's spiciest claim to the newspaper, though, was that the chance AI will wreck humanity is around 70 percent - odds you wouldn't accept for any major life event, but that OpenAI and its ilk are barreling ahead with anyway.
The 31-year-old Kokotajlo told the NYT that after he joined OpenAI in 2022 and was asked to forecast the technology's progress, he became convinced not only that the industry would achieve AGI by the year 2027, but that there was a great probability that it would catastrophically harm or even destroy humanity.
Kokotajlo became so convinced that AI posed massive risks to humanity that eventually, he personally urged OpenAI CEO Sam Altman that the company needed to "pivot to safety" and spend more time implementing guardrails to rein in the technology rather than continue making it smarter.
Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had "lost confidence that OpenAI will behave responsibly" as it continues trying to build near-human-level AI.
"We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," the company said in a statement after the publication of this piece.
Saved 56% of original text.
thingsiplay@beehaw.org
on 05 Jun 03:57
How did he calculate the 70% chance? Without an explanation, this opinion is about as important as a Reddit post. It's just marketing fluff talk, so people talk about AI and in return a small percentage get converted into people interested in AI. Let's call it clickbait talk.
First he talks about a high chance that humans get destroyed by AI, then follows with a prediction that it would achieve AGI in 2027 (only 3 years from now). No. Just no. There is a long way to go to general intelligence. But isn't he trying to sell you on why AI is great? He follows with:
"We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,"
Ah yes, he does.
LibertyLizard@slrpnk.net
on 05 Jun 03:59
Insider from OpenAI PR department speaks out!
joelfromaus@aussie.zone
on 05 Jun 09:54
How did he calculate the 70% chance?
Maybe they asked ChatGPT?
MagicShel@programming.dev
on 05 Jun 10:27
ChatGPT says 1-5%, but I told it to give me nothing but a percentage and it gave me a couple of paragraphs like a kid trying to distract from the answer by surrounding it with bullshit. I think it's onto us…
(I kid. I attribute no sentience or intelligence to ChatGPT.)
eveninghere@beehaw.org
on 05 Jun 14:31
This is a horoscope trick. They can always say AI destroyed humanity.
Trump won in 2016 and there was Cambridge Analytica doing data analysis: AI technology destroyed humanity!
Israel used AI-guided missiles to attack Gaza: AI destroyed humanity!
Whatever. You can point at whatever catastrophe and there is always AI behind it, because AI has already been a basic technology used everywhere since 2014.
The person who predicted a 70% chance of AI doom is Daniel Kokotajlo, who quit OpenAI because it was not taking this seriously enough. The quote you have there is a statement by OpenAI, not by Kokotajlo; this is all explicit in the article. The idea that this guy is motivated by trying to do marketing for OpenAI is just wrong: the article links to some of his extensive commentary, where he is advocating for more government oversight specifically of OpenAI and other big companies, instead of the favorable regulations that company is pushing for. The idea that his belief in existential risk is disingenuous also doesn't make sense; it's clear that he and other people concerned about this take it very seriously.
This fear mongering just benefits Altman. If his product is powerful enough to be a threat to humanity, then it is also powerful enough to be capable of many useful things - things it has not proven itself to be capable of. Ironically, spreading fear about its capabilities will likely raise investment, so if you actually are afraid of OpenAI somehow arriving at a dangerous AGI, then you should really be trying to convince people of its lack of real utility.
The guy complaining left the company:
Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had "lost confidence that OpenAI will behave responsibly" as it continues trying to build near-human-level AI.
I don't think that he stands to benefit.
He also didn't say that OpenAI was on the brink of having something like this, either.
Like, I don't think all the fighting at OpenAI and people being ejected and such is all a massive choreographed performance. I think that there have been people who really strongly disagree with each other.
I absolutely think that AGI has the potential to pose existential risks to humanity. I just don't think that OpenAI is anywhere near building anything capable of that. But if you're trying to build towards such a thing, the risks are something that I think a lot of people would keep in mind.
I think that human-level AI is very much technically possible. We can do it ourselves, and we have hardware with superior storage and compute capacity. The problem we haven't solved is the software side. And I can very easily believe that we may get there not all that far in the future - years or decades, not centuries down the road.
I didn't think it was a choreographed publicity stunt. I just know Altman has used AI fear in the past to keep people from asking rational questions like "What can this actually do?" He obviously stands to gain from people thinking they are on the verge of AGI. And someone looking for a new job in the field also has something to gain from it.
As for the software thing, if it's done by someone, it won't be OpenAI and the megacorporations following in its footsteps. They seem insistent on throwing more data (of diminishing quality) and more compute (an impractical amount) at the same style of models, hoping they'll reach some kind of tipping point.
django@discuss.tchncs.de
on 05 Jun 04:48
The energy demand of AI will harm humanity, because we keep feeding it huge amounts of energy produced by burning fossil fuels.
I just realized something: since most people have no idea what AI is, it could easily be used to scam people. I think that will be its main function originally.
Like, the average person does not have access to real-time stock data. You could make a fake AI program that pretends to be a trading algorithm and makes a ton of pretend money as the mark watches. The data would be 100% real and verifiable, just picked a few seconds after the fact.
Since most people care a lot about money, this will be one of the first widespread applications of real-time AI: just tricking people out of money.
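To make the trick concrete, here is a minimal sketch of such a "hindsight trading bot": it only shows the mark trades it has already seen pay off, so the displayed record is flawless while containing zero predictive power. Everything here (names, data) is hypothetical illustration, not a real product.

```python
# Hypothetical sketch of the "hindsight trading bot" trick described above:
# the "AI" only announces a trade after it has already seen the price move,
# so every trade shown to the mark is a winner. Pure lookahead, no intelligence.
import random

def fake_ai_demo(prices, stake=100.0):
    """Replay real price data, 'deciding' each trade with full hindsight."""
    balance = stake
    for now, later in zip(prices, prices[1:]):
        if later > now:              # we already know the price will rise...
            balance *= later / now   # ...so 'buy' and book the gain
        # if the trade would have lost money, the bot silently 'sits out'
    return balance

# Real, verifiable price ticks (could come from any public feed).
ticks = [100 + random.uniform(-1, 1) for _ in range(500)]
print(f"'AI' turned $100 into ${fake_ai_demo(ticks):.2f}")  # always >= $100
```

The "track record" the mark sees is real data and real arithmetic; the only thing missing is that no decision was ever made before the outcome was known.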
scrubbles@poptalk.scrubbles.tech
on 05 Jun 05:58
Yeah, I'll admit I was freaked out at the beginning. So I learned about models, used them, and got familiar with them. Now I'm less freaked out and more "oh my god, so many people are going to get scammed/tricked".
Go on Facebook and you'll see it's a good 50-70% AI garbage now. My favorites are the "log cabin" and kitchen posts that are just images of them with blanket titles like "wish I lived here", with THOUSANDS of comments of people saying "YES" or "it's so beautiful". Of course it is - it has no supports! The cabinets are held up by nothing! There are 9 kinds of lanterns and most are floating. Jesus, people are not ready for it.
The "Willy Wonka Experience" event comes to mind. The images on the website were so obviously AI-generated, but people still coughed up £35 a ticket to take their kids to it, and were then angry that the "event" was an empty warehouse with a couple of plastic props and three actors trying to improvise because the script they'd been given was AI-generated gibberish. Straight up scam.
There are already cases of people pretending to be AI, and of people revealing dumb info about themselves lol
I feel this is all just a scam, trying to drive up the value of AI stocks. No one in the media seems to talk about the hallucination problem, the problem with limited data for new models (Habsburg-AI), the energy restrictions, etc.
It's all uncritical belief that "AI" will just become smart eventually.
This technology is built upon a hype, and it is nothing more than that. There are limitations, and they have reached them.
AI bros are just NFT bros with an actual product.
And these current LLMs aren't just gonna find sentience for themselves. Sure, they'll pass a Turing test, but they aren't alive lol
I think the issue is not whether it's sentient or not; it's how much agency you give it to control stuff.
Even before the AI craze this was an issue. Imagine if you were to create an automatic turret that kills living beings on sight: you would have to make sure you add a kill switch, or you yourself wouldn't be able to turn it off anymore without getting shot.
The scary part is that the more complex and adaptive these systems become, the more difficult it can be to stop them once they are in autonomous mode. I think large language models are just another step in that complexity.
An atomic bomb doesn't pass a Turing test, but it's a fucking scary thing nonetheless.
averyminya@beehaw.org
on 05 Jun 09:12
Energy restrictions actually could be pretty easily worked around using analog converting methods. Otherwise I agree completely though - what's the point of using energy on useless tools? There are so many great things that AI is and can be used for, but of course, like anything exploitable, whatever is "for the people" becomes some amalgamation of extracting our dollars.
The funny part to me is that, like the mentioned "beautiful" AI cabins that are clearly fake, there's this weird dichotomy of people just not caring or being too ignorant to notice the poor details, while at the same time so many generative AI tools are specifically being used to remove imperfection during the editing process. And that in itself is something that's too bad. I'm definitely guilty of aiming for "the perfect composition", but sometimes nature and timing force your hand, which makes the piece ephemeral in a unique way. Shadows are going to exist, background subjects are going to exist.
The current state of marketed AI is selling the promise of perfection, something that's been getting sold for years already. It's just that now it's far easier to pump out scam material with these tools, something that gets easier with each advancement in these sorts of technologies, and now with more environmental harm than just the victim of a predator.
It really sucks being an optimist sometimes.
darkphotonstudio@beehaw.org
on 05 Jun 09:50
It could be only hype. But I don't entirely agree. Personally, I believe we are only a few years away from AGI. Will it come from OpenAI and LLMs? Maybe, but it will likely come from something completely different. Like it or not, we are within spitting distance of a true Artificial Intelligence, and it will shake the foundations of the world.
Habsburg-AI? Do you have any idea how much you made me laugh in real life with this expression??? It's just… perfect! Model degeneration is a lot like what happened with the Habsburg family's genetic pool.
When it comes to hallucinations in general, I've got another analogy: someone trying to use a screwdriver with nails, failing, and calling it a hallucination. In other words, I don't think that the models are misbehaving; they're simply behaving as expected, and any "improvement" in this regard is basically a band-aid added by humans to a procedure that doesn't yield a lot of useful outputs to begin with.
And that reinforces the point from your last paragraph - those people genuinely believe that, if you feed enough data into an L"L"M, it'll "magically" become smart. It won't, just like 70 kg of bees won't "magically" think as well as a human being would. The underlying process is "dumb".
I'm glad you liked it. Can't take the credit for this one though; I first heard it from Ed Zitron on his podcast "Better Offline". Highly recommend.
starman@programming.dev
on 05 Jun 08:17
OpenAI Insider
Ah, what a reliable and unbiased source
darkphotonstudio@beehaw.org
on 05 Jun 09:33
I believe much of our paranoia concerning AI stems from our fear that something will come along and treat us like we treat all the other life on this planet. Which is bitterly ironic, considering our propensity for slaughtering each other on a massive scale. The only danger to humanity is humans.
If humanity is doomed, it will be our own stupid fault, not AI.
But if AI learns from us…
darkphotonstudio@beehaw.org
on 05 Jun 12:47
True. But we are still talking about what is essentially an alien mind. Even if it can do a good impression of a human intelligence, that doesn't mean it is a human mind. It won't have billions of years of evolution and thousands of years of civilization and development.
I think much of it comes from "futurologists" spending too much time smelling each others' farts. These AI guys think so very much of themselves.
Hazzia@discuss.tchncs.de
on 05 Jun 10:58
It's crazy how little experts like these think of humanity, or how much they underestimate our tolerance of and adaptability to weird shit. People used to talk about how "if we ever learned UFOs were a real phenomenon, there would be global mayhem!" because people's world views would collapse and they'd riot, or whatever. After a few more articles over the past few years since that first NY Times article, I've basically not heard of anyone really caring (who didn't already seem to be into them before, anyway). Hell, we had a legitimate attempt to overthrow our own government, and the large majority of our population just kept on with their lives.
The same AI experts 10 years ago would have thought the AI we have right now would have caused societal collapse.
darkphotonstudio@beehaw.org
on 05 Jun 12:42
Idk about societal collapse, but think about the amount of damage the World Wide Web and social media have done and continue to do. Look at the mess cars have made of cities around the world over the course of a century. Just because it doesn't happen overnight doesn't mean serious problems can't occur. I think we have 10 years before the labour market is totally upended, with or without real AGI. Even narrow AI is capable of fucking things up on a scale no one wants to admit.
darkphotonstudio@beehaw.org
on 05 Jun 12:56
Agreed, partially. However, the "techbros" in charge, for the most part, aren't the researchers. There are futurologists who are real scientists and researchers. Dismissing them smacks of the anti-science knuckleheads ignoring warnings about the dangers of not wearing masks and not getting vaccinated during the pandemic. Not everyone interested in the future is a techbro.
"Futurologist" is a self-appointed honorific used by people who fancy themselves "deep thinkers" while thinking of nothing more deeply than how deep they are. It's like declaring oneself an "intellectual".
I'm sorry, but this is a really dumb take that borders on climate change denial logic. A sufficiently large comet is an existential threat to humanity. You seem to have this optimistic view that humanity is invincible against any threat but itself, and I do not think that belief is justified.
People are right to be very skeptical about OpenAI and "techbros". But I fear this skepticism has turned into outright denial of the genuine risks posed by AGI.
I find myself exhausted by this binary partitioning of discourse surrounding AI. Apparently you have to either be a cult member who worships the coming god of the singularity, or think that AI is either impossible or incapable of posing a serious threat.
darkphotonstudio@beehaw.org
on 05 Jun 16:20
You seem to have this optimistic view that humanity is invincible against any threat but itself
I didn't say that. You're making assumptions. However, I don't take AGI as a serious risk, not directly anyway. AGI is a big question mark at this time and hardly comparable to a giant comet or a pandemic, of which we have experience or solid scientific evidence. Could it be a threat? Yeah. Do I personally think so? No. Our reaction to and exploitation of it will likely do far more harm than any direct action by an AGI.
Wake me up when nixpkgs issues decline significantly from 5k+ due to AI.
2xsaiko@discuss.tchncs.de
on 05 Jun 11:27
I mean, I give it a 100% chance if they are allowed to keep going like this, considering the enormous energy and water consumption, the essentially slave labor to classify data for training (because it's such a huge amount that it would never be financially viable to fairly pay people), and the end result, which is to fill the internet with garbage.
You really don't need to be an insider to see that.
May I be blunt? I estimate that 70% of all OpenAI and 70% of all "insiders" are full of crap.
What people nowadays call "AI" is not a magic solution for everything. It is not an existential threat either. The main risks that I see associated with it are:
1. Assumptive people taking LLM output for granted, to disastrous outcomes. Think "yes, you can safely mix bleach and ammonia" tier (note: made-up example).
2. Supply and demand. Generative models have awful output, but sometimes "awful" = "good enough".
3. Heavy increase in energy and resource consumption.
None of those issues was created by machine "learning"; it's just that it synergises with them.
BarryZuckerkorn@beehaw.org
on 05 Jun 14:01
Your scenario 1 is the actual danger. It's not that AI will outsmart us and kill us. It's that AI will trick us into trusting it with more responsibility than it can responsibly handle, to disastrous results.
It could be small-scale, low-stakes stuff, like an AI designing a menu that humans blindly cook from. Or it could be higher-stakes stuff that actually does things like affecting election results, crashing financial markets, causing a military to target the wrong house, etc. The danger has always been that humans will act on the information provided by a malfunctioning AI, not that AI and technology will be a closed loop with no humans involved.
Yup, it is a real risk. But on a lighter note, it's a risk that we [humanity] have been fighting against since forever - the possibility of some of us causing harm to the others, not out of malice, but out of assumptiveness and similar character flaws. (In this case: "I assume that the AI is reliable enough for this task.")
I'm reading your comment as "[AI is] Not yet [an existential threat], anyway". If that's inaccurate, please clarify, OK?
With that reading in mind: I don't think that the current developments in machine "learning" lead towards some hypothetical system that would be an existential threat. The closest to that would be the subset of generative models, and that looks like a tech dead end - sure, it might see some applications, but I don't think that it'll progress much past the current state.
In other words, I believe that an AI that would be an existential threat would be nothing like what's being created and overhyped now.
Yeah, the short-term outlook doesn't look too dangerous right now. LLMs can do a lot of things we thought wouldn't happen for a long time, but they still have major issues and are running out of easy scalability.
That being said, there are a lot of different training schemes or integrations with classical algorithms that could be tried. ChatGPT knows a scary amount of stuff (inb4 Chinese room); it just doesn't have any incentive to use it except to mimic human-generated text. I'm not saying it's going to happen, but I think it's premature to write off the possibility of an AI with complex planning capabilities in the next decade or so.
I don't think that a different training scheme or integrating it with already existing algos would be enough. You'd need a structural change.
I'll use a silly illustration for that; it's somewhat long, so I'll put it inside spoilers. (Feel free to ignore it though - it's just an illustration; the main claim is outside the spoilers tag.)
The Mad Librarian and the Good Boi
Let's say that you're a librarian. And you have lots of books to sort out. So you want to teach a dog to sort books for you, starting with sci-fi and geography books.
So you set up the training environment: a table with a sci-fi book and a geography book. And you give your dog a treat every time that he puts the ball over the sci-fi book.
At the start, the dog doesn't do it. But then, as you train him, he's able to do it perfectly. Great! Does the dog now recognise sci-fi and geography books? You test this out by switching the placement of the books and asking the dog to perform the same task; now he's putting the ball over the geography book. Nope - he doesn't know how to tell sci-fi and geography books apart; you were "leaking" the answer through the placement of the books.
Now you repeat the training with random positions for the books. Eventually, after a lot of training, the dog is able to put the ball over the sci-fi book, regardless of position. Now the dog recognises sci-fi books, right? Nope - he's identifying books by their smell.
To fix that, you try again with new copies of the books. Now he's identifying the colour: the geography book has the same grey/purple hue as grass (from a dog's PoV), and the sci-fi book is black like the neighbour's cat. The dog would happily put the ball over the neighbour's cat and ask "where's my treat, human???" if the cat allowed it.
Needs more books. You assemble a plethora of geo and sci-fi books. Since sci-fi covers typically tend to be dark, and the geo books tend to have nature on their covers, the dog is able to place the ball over the sci-fi books 70% of the time. Eventually you give up and say that the 30% error is the dog "hallucinating".
We might argue that, by now, the dog should be "just a step away" from recognising books by topic. But we're just fooling ourselves; the dog is finding a bunch of orthogonal (like the smell) and diagonal (like the colour) patterns. What the dog is doing is still somewhat useful, but it won't go much past that.
And even if you and the dog lived forever (denying St. Peter the chance to tell him "you weren't a good boy. You were the best boy."), and spent most of your time on that training routine, his little brain won't be able to create the associations necessary to actually identify a book by its topic, i.e. by its content.
I think that what happens with LLMs is a lot like that. With a key difference - dogs are considerably smarter than even state-of-the-art LLMs, even if they're unable to speak.
At the end of the day, LLMs are complex algorithms associating pieces of words based on statistical inference. This is useful, and you might even see some emergent behaviour - but they don't "know" stuff, and this is trivial to show, as they fail to perform simple logic even with pieces of info that they're able to reliably output. Different training and/or algos might change the info that they output, but they won't "magically" go past that.
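For anyone who wants the dog-and-books failure in more concrete terms, here is a tiny synthetic sketch of the same idea (often called shortcut learning): a toy classifier is trained on data where a spurious "smell" feature happens to track the label, and its accuracy drops sharply once that shortcut is removed. All data and names are made up purely for illustration.

```python
# Minimal sketch of the "dog sorting books" failure: a toy classifier trained
# on data where a spurious feature (the 'smell') happens to track the label,
# then tested with that shortcut removed. Entirely synthetic, for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                       # 0 = geography, 1 = sci-fi

topic = y + rng.normal(0, 1.5, n)               # the real "topic" signal, but noisy
smell_train = y + rng.normal(0, 0.05, n)        # the "smell": tracks the label almost perfectly

clf = LogisticRegression().fit(np.column_stack([topic, smell_train]), y)

# Test set where the smell no longer tracks the topic (new copies of the books)
y_test = rng.integers(0, 2, n)
topic_test = y_test + rng.normal(0, 1.5, n)
smell_test = rng.normal(0.5, 0.05, n)           # shortcut removed

print("with shortcut   :", clf.score(np.column_stack([topic, smell_train]), y))
print("shortcut removed:", clf.score(np.column_stack([topic_test, smell_test]), y_test))
# Typically near-perfect in the first case and much worse in the second:
# the model mostly learned the shortcut, not the topic.
```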
I have this debate so often, I'm going to try something a bit different. Why don't we start by laying down how LLMs do work? If you had to explain, as fully as you could, the algorithm we're talking about, how would you do it?
The Chinese room experiment is about the internal process - whether it thinks or not, whether it simulates or knows - with a machine that passes the Turing test. My example clearly does not bother with all that; what matters here is the ability to perform the goal task.
As such, no, my example is not the Chinese room. I'm highlighting something else: that the dog will keep making spurious associations, which will affect the outcome. Is this clear now?
Why this matters: on the topic of existential threat, it's pretty much irrelevant whether the AI in question "thinks" or not. What matters is its usage in situations where it would "decide" something.
I have this debate so often, I'm going to try something a bit different. Why don't we start by laying down how LLMs do work? If you had to explain, as fully as you could, the algorithm we're talking about, how would you do it?
Why don't we do the following instead: I'll play along with your inversion of the burden of proof once you show how it would be relevant to your implicit claim that AI [will|might] become an existential threat (from "[AI is] Not yet [an existential threat], anyway")?
Also worth noting that you outright ignored the main claim outside the spoilers tag.
Yeah, sorry, I don't want to invert the burden of proof - or at least, I don't want to ask anything unreasonable of you.
Okay, let's talk just about the performance we measure - it wasn't clear to me that that's what you meant from what you wrote. Natural language is inherently imprecise, so no bitterness intended, but in particular that's how I read the section outside of the spoiler tag.
By some measures, it can do quite a bit of novel logic. I recall it drawing a unicorn using text commands in one published test, for example, which correctly had a horn, body and four legs. That requires combining concepts in a way that almost certainly isn't directly in the training data, so it's fair to say it's not a mere search engine. Then again, sometimes it just doesn't do what it's asked, for example when adding two numbers - it will give a plausible-looking result, but that's all.
So, we have a black box, and we're trying to decide if it could become an existential threat. Do we agree that a computer just as smart as us probably would be? If so, that reduces to whether the black box could be just as smart as us eventually. Up until now, there have been great reasons to say no, even about black-box software. I know Clippy could never have done it, because there are forms of reasoning classical algorithms just couldn't do, despite great effort - it doesn't matter if Clippy is closed source, because it was a classical algorithm.
On the other hand, what neural nets can't do is a total unknown. GPT-n won't add numbers directly, but it is able to correctly perform the steps, which you can show by putting it in a chain-of-thought framework. It just "chooses" not to, because that's not how it was trained. GPT-n can't organise a faction that threatens human autonomy, but we don't know if that's because it doesn't know the steps, or because of the lack of memory and a cost function to make it do that.
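As a rough illustration of what "putting it in a chain-of-thought framework" can look like in practice, here is a sketch using the OpenAI Python client. The model name is a placeholder assumption rather than anything specified in this thread, and current models may well answer the direct prompt correctly too; the point is only the contrast between the two prompting styles.

```python
# Rough sketch of the chain-of-thought idea mentioned above, using the OpenAI
# Python client. The model name is an assumption/placeholder; any available
# chat-completion model would do for the comparison.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

direct = "What is 48,317 + 9,286? Reply with only the number."

step_by_step = (
    "What is 48,317 + 9,286? Add the numbers column by column, "
    "writing out each carry, and only then state the final sum."
)

for prompt in (direct, step_by_step):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: swap in whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)

# The point of the comparison: the bare prompt tends to elicit a plausible-looking
# guess, while spelling out the steps usually walks the model to the correct 57,603.
```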
It's a black box, there are no known limits on what it could do, and it's certain to be improved on quickly, at least in some way. For this reason, I think it might become an existential threat in some future iteration.
I also apologise for the tone. That was a knee-jerk reaction on my part; my bad.
(In my own defence, I've been discussing this topic with tech bros, and they rather consistently invert the burden of proof, often to evoke Brandolini's Law. You probably know which "types" I'm talking about.)
On topic. Given that "smart" is still an internal attribute of the black box, perhaps we could better gauge whether those models are likely to become an existential threat by 1) what they output now, 2) what they might output in the future, and 3) what we [people] might do with it.
It's also easier to work with your example productively this way. Here's a counterpoint:
The prompt asks for eight legs, and only one pic was able to output it correctly; two ignored it, and one of the pics shows ten legs. That's 25% accuracy.
I believe that the key difference between "your" unicorn and "my" eight-legged dragon is in the training data. Unicorns are fictitious but common in popular culture, so there are lots of unicorn pictures to feed the model with, while eight-legged dragons are something that I made up, so there's no direct reference, even if you could logically combine other references (as a spider + a dragon).
So their output is strongly limited by the training data, and it doesn't seem to follow much in the way of strong logic. What they might output in the future depends on what we add in; the potential for decision-making is rather weak, as they wouldn't be able to deal with unpredictable situations. And thus so is their ability to go rogue.
[Note: I repeated the test with a horse instead of a dragon, within the same chat. The output was slightly less bad, confirming my hypothesis - because pics of eight-legged horses do exist, thanks to Sleipnir.]
Neural nets
Neural networks are a different can of worms for me, as I think that they'll outlive LLMs by a huge margin, even if the current LLMs use them. However, how they'll be used is likely to be considerably different.
For example, current state-of-the-art LLMs are coded with some "semantic" supplementation near the embedding, added almost like an afterthought. However, semantics should play a central role in the design of the transformer - because what matters is not the word itself, but what it conveys.
That would be considerably closer to a general intelligence than modern LLMs are - because you're effectively demoting language processing to input/output, which might as well be subbed with something else, like pictures. In this situation I believe that the output would be far more accurate, and it could theoretically handle novel situations better. Then we could have some concerns about AI being an existential threat - because people would use this AI for decision-making, and it might output decisions that go terribly right, as in that "paperclip factory" thought experiment.
The fact that we don't see developments in this direction yet shows, for me, that it's easier said than done, and we're really far from that.
To be clear, I wasn't talking about an actual picture-generating model. It was raw GPT trained on just text, asked to write instructions for a paint program to output a unicorn. That's more convincing because it's multiple steps away from the basic task it was trained on. Here, I found the paper; it starts with unicorns and then starts exploring other images, and eventually they delve into way more detail than I actually read. There's a video talk that goes with it.
The trick with trying to "make" an AI do semantics is that we don't know what semantics is, exactly. I mean, that's kind of what we started out with (remember the old pattern-matching chatbots?), but simpler approaches often worked better. Even the Transformer block itself is barely more complicated than a plain feed-forward network. I don't think that's so much because neural nets are more efficient (they really aren't), but because we were looking for an answer to a question we didn't have.
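To make the "barely more complicated than a plain feed-forward network" point concrete, here is a bare-bones sketch of a Transformer block in PyTorch. It is a simplified illustration (no dropout, masking or positional encodings), not any particular model's actual implementation.

```python
# A bare-bones Transformer block: self-attention plus a plain feed-forward net,
# each wrapped in a residual connection and a LayerNorm. Simplified sketch only.
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=4, d_ff=1024):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # tokens attend to each other, then each token goes through the same MLP
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        return self.norm2(x + self.ff(x))

x = torch.randn(2, 10, 256)           # (batch, sequence, embedding)
print(TransformerBlock()(x).shape)    # torch.Size([2, 10, 256])
```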
I think the challenge going forwards is freeing all that know-how from the black box we've put it in, somehow. Assuming we do want to mess with something so dangerous if handled carelessly.
I think any prediction based on a "singularity" neglects to consider the physical limitations, and just how long the journey towards significant amounts of AGI would be.
The human brain has an estimated 100 trillion neuronal connections - so probably a good order of magnitude estimation for the parameter count of an AGI model.
If we consider a current GPU, e.g. the 12 GB RTX 3060, it can hold about 24 billion parameters at 4-bit quantisation (in reality a fair few less), and it uses 180 W of power. So that means an AGI might use 750 kW of power to operate. A super-intelligent machine might use more. That is a farm of 2,500 300 W solar panels, while the sun is shining, just for the equivalent of one person.
Now, to pose a real threat against the billions of humans, you'd need more than one person's worth of intelligence. Maybe an army equivalent to 1,000 people, powered by roughly 4.2 million GPUs and 2,500,000 solar panels.
That is not going to materialise out of the air too quickly.
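For reference, here is the same napkin math as a quick script, using the commenter's assumptions (100 trillion parameters, 12 GB of VRAM per GPU at 4-bit quantisation, 180 W per GPU, 300 W panels). The figures are illustrative order-of-magnitude inputs, not measurements.

```python
# Reproducing the napkin math above with the same assumptions.
params_needed   = 100e12    # ~human synaptic connection count
bytes_per_param = 0.5       # 4-bit quantisation
vram_per_gpu    = 12e9      # bytes
watts_per_gpu   = 180
watts_per_panel = 300

params_per_gpu  = vram_per_gpu / bytes_per_param        # ~24 billion
gpus_per_person = params_needed / params_per_gpu        # ~4,167 GPUs
kw_per_person   = gpus_per_person * watts_per_gpu / 1e3 # ~750 kW

army = 1_000  # person-equivalents
print(f"{gpus_per_person:,.0f} GPUs and {kw_per_person:,.0f} kW per person-equivalent")
print(f"{army * gpus_per_person:,.0f} GPUs and "
      f"{army * kw_per_person * 1e3 / watts_per_panel:,.0f} panels for an army of {army:,}")
# ~4,167 GPUs / 750 kW per person-equivalent;
# ~4.2 million GPUs and 2.5 million panels for 1,000 of them.
```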
In practice, as we get closer to an AGI or ASI, there will be multiple separate deployments of similar sizes (within an order of magnitude), and they won't be aligned to each other - some systems will be adversaries of any system executing a plan to destroy humanity, and will be aligned to protect against harm (AI technologies are already widely used for threat analysis). So you'd have a bunch of malicious systems and a bunch of defender systems going head to head.
The real AI risks, which I think many of the people ranting about singularities want to obscure, are:
- An oligopoly of companies gets dominance over the AI space and perpetuates a "rich get richer" cycle, accumulating wealth and power to the detriment of society. OpenAI, Microsoft, Google and AWS are probably all battling for that. Open models are the way to battle that.
- People can no longer trust their eyes when it comes to media; existing problems of fake news, deepfakes, and so on become so severe that they undermine any sense of truth. That might fundamentally shift society, but I think we'll adjust.
- Doing bad stuff becomes easier. That might be scamming, but at the more extreme end it might be designing weapons of mass destruction. On the positive side, AI can help defenders too.
- Poor-quality AI might be relied on to make decisions that affect people's lives. This is best handled through the same regulatory approaches that prevent companies and governments from doing the same with simple flow charts / scripts.
darkphotonstudio@beehaw.org
on 05 Jun 12:37
I think you're right on the money when it comes to the real dangers, especially your first bullet point. I don't necessarily agree with your napkin maths. If the virtual neurons are used in a more efficient way, that could make up for a lot versus the human neuron count.
technocrit@lemmy.dbzer0.com
on 05 Jun 14:18
So you'd have a bunch of malicious systems, and a bunch of defender systems, going head to head.
Let me guess… the USA is the defender and Russia/China are malicious? Seriously though, who is going to be running the malicious machines trying to "destroy humanity"? If you're talking about capitalism destroying the planet, this has already been happening without AI. Otherwise this seems like just another singularity fantasy.
The fear that people who like to talk about the singularity like to propose is that there will be one "rogue" misaligned ASI that progressively takes over everything - i.e. all the AI in the world works against all the people.
My point is that, more likely, there will be lots of ASI or AGI systems, not aligned to each other, most of them on the side of the humans.
CanadaPlus@lemmy.sdf.org
on 05 Jun 19:21
The human brain has an estimated 100 trillion neuronal connections - so probably a good order of magnitude estimation for the parameter count of an AGI model.
Yeah, but a lot of those do things unrelated to higher reasoning. A small monkey is smarter than a moose, despite the moose obviously having way more synapses.
I don't think you can rely on this kind of argument so heavily. A brain isn't a muscle.
This is something I think needs to be interrogated. None of these models, even the supposedly open ones, are actually "open" or even currently "openable". We can know the exact weights for every single parameter, the code used to construct it, and the data used to train it, and that information gives us basically no insight into its behavior. We simply don't have the tools to actually "read" a machine learning model in the way you would an open source program; the tech produces black boxes as a consequence of its structure. We can learn about how they work, for sure, but the corps making these things aren't that far ahead of the public when it comes to understanding what they're doing or how to change their behavior.
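As a toy illustration of that point: even for a network small enough that every weight fits on the screen, the raw numbers say almost nothing about what it computes. This is a made-up example using sklearn purely for convenience.

```python
# Toy illustration: a tiny network that learns XOR is already effectively
# unreadable from its weights alone, even though every number is "open".
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                      # XOR

net = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                    solver="lbfgs", max_iter=5000, random_state=2).fit(X, y)

print("predictions:", net.predict(X))           # hopefully [0 1 1 0]; seeds can vary
for i, (w, b) in enumerate(zip(net.coefs_, net.intercepts_)):
    print(f"layer {i} weights:\n{w}\nbiases: {b}")
# All the "open" information is right there, and it still tells you nothing
# intuitive about how the network represents XOR - let alone a trillion-weight model.
```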
technocrit@lemmy.dbzer0.com
on 05 Jun 14:15
OpenAI ~~Insider~~ Investor Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity
I think when people think of the danger of AI, they think of something like Skynet or the Matrix. It either hijacks technology or builds it itself and destroys everything.
But what seems much more likely, given what we've seen already, is corporations pushing AI that they know isn't really capable of what they say it is, and everyone going along with it because of money and technological ignorance.
You can already see the warning signs: cars that run pedestrians over, search engines that tell people to eat glue, customer support AIs that have no idea what they're talking about, endless fake reviews and articles. It's already hurt people, but so far only on a small scale.
But the profitability of pushing AI early, especially if you're just pumping and dumping a company for quarterly profits, is massive. The more that gets normalized, the greater the chance one of them gets put in charge of something important, or becomes a barrier to something important.
That's what's scary about it. It isn't AI itself, it's AI as a vector for corporate recklessness.
newtraditionalists@mastodon.social
on 05 Jun 18:56
@millie@floofloof this is so well articulated I can't stand it. I want to have it printed out and hand it to anyone who asks me anything about AI. Thank you for this!
I don't think your assumption holds. Corporations are not, as a rule, incompetent - in fact, they tend to be really competent at squeezing profit out of anything. They are misaligned, which is much more dangerous.
I think the more likely scenario is also more grim:
AI actually does continue to advance and gets better and better, displacing more and more jobs. It doesn't happen instantly, so barely anything gets done. Some half-assed regulations are attempted, but they predictably end up either not doing anything, postponing the inevitable by a small amount of time, or causing more damage than doing nothing would. Corporations grow in power, build their own autonomous armies, and exert pressure on governments to leave them unregulated. Eventually all resources are managed by and for a few rich assholes, while the rest of the world tries to survive without angering them.
If we're unlucky, some of those corporations end up being managed by a maximizer AGI with no human supervision, and then the Earth pretty much becomes an abstract game with a scoreboard, where money (or whatever the equivalent is) is the score.
The limitations of the human body act as an important balancing factor keeping democracies from collapsing. No human can rule a nation alone - they need armies and workers. Intellectual work is especially important (unless you have some other source to outsource it to), but it requires good living conditions to develop and sustain. Once intellectual work is automated, infrastructure like schools, roads, hospitals and housing ceases to be important for the rulers - they can give those to the army as a reward and make the rest of the population do manual work. Then, if manual work and policing through force become automated, there is no need even for those slivers of decency.
Once a single human can rule a nation, there are enough rich psychopaths for one of them to attempt it.
AI doesn't get better. It's completely dependent on computing power. They are dumping all the power into it they can, and it sucks ass. The larger the dataset, the more power it takes to search it all. Your imagination is infinite; computing power is not. You can't keep throwing electricity at a problem. It was pushed out because there was a bunch of excess computing power after crypto crashed, or semi-stabilized. It's an excuse to lay off a bunch of workers after covid who were gonna get laid off anyway. Managers were like, sweet, I'll trim some excess employees and replace them with AI! Wrong. It's a grift. It might hang on for a while, but policy experts are already looking at the amount of resources being thrown at it and getting wary. The technological ignorance you are responding to? That's you. You don't know how the economy works and you don't know how AI works, so you're just believing all this Roko's basilisk nonsense out of an overactive imagination. It's not an insult - lots of people are falling for it, AI companies are straight up lying, and the media is stretching the truth of it to the point of breaking. But I'm telling you, don't be a sucker. Until there's a breakthrough that fixes the resource consumption issue by, like, orders of magnitude, I wouldn't worry too much about Ellison's AM becoming a reality.
I find it rather disingenuous to summarize the previous poster's comment as a "Roko's basilisk" scenario - intentionally picking a ridiculous argument to characterize the other side of the debate. I think they were pretty clear about actual threats (some more plausible than others, IMO).
I also find it interesting that you so confidently state that "AI doesn't get better" under the assumption that our current deep learning architectures are the only way to build AI systems.
I'm going to make a pretty bold statement: AGI is inevitable, assuming human technological advancement isn't halted altogether. Why can I so confidently state this? Because we already have GI without the A. To say that it is impossible is, to me, equivalent to arguing that there is something magical about the human brain that technology could never replicate. But brains aren't magic; they're incredibly sophisticated electrochemical machines. It is only a matter of time before we find a way to replicate "general intelligence", whether it's through new algorithms, new computing architectures, or even synthetic biology.
I wasn't debating you. I have debates all day with people who actually know what they're talking about; I don't come to the internet for that. I was just looking out for you, and anyone else who might fall for this. There is a hard physical limit. I'm not saying the things you're describing are technically impossible; I'm saying they are technically impossible with this version of the tech. Slapping a predictive text generator on a giant database is too expensive, and it doesn't work. It's not a debate, it's science. And not the fake shit run by corporate interests - the real thing, based on math.
There's gonna be a heatwave this week in the Western US, and there are almost constant deadly heatwaves in many parts of the world from burning fossil fuels. But we can't stop producing electricity to run these scam machines, because someone might lose money.
Your opening sentence is demonstrably false. GPT-2 was a shitpost generator, while GPT-4 output is hard to distinguish from a genuine human's. DALL-E 3 is better than its predecessors at pretty much everything. Yes, generative AI right now is getting better mostly by feeding it more training data and making it bigger. But it keeps getting better, and there's no cutoff in sight.
That you can straight-up comment "AI doesn't get better" in a tech-literate sub and not be called out is honestly staggering.
That you can straight-up comment "AI doesn't get better" in a tech-literate sub and not be called out is honestly staggering.
I actually don't think it is, because, as I alluded to in another comment in this thread, so many people are still completely in the dark on generative AI - even in general technology-themed areas of the internet. Their only understanding of it comes from reading the comments of morons (because none of these people ever actually read the linked article) who regurgitate the same old "big tech is only about hype, techbros are all charlatans from the capitalist elite" lines for karma/retweets/likes, without ever actually taking the time to hear what people working within the field (i.e. experts) are saying. People underestimate the capabilities of AI because it fits their political world view, and in doing so they are sitting ducks when it comes to the very real threats it poses.
The difference between GPT-3 and GPT-4 is the number of parameters, i.e. processing power. I don't know what the difference between 2 and 4 is; maybe there were some algorithmic improvements. At this point, I don't know what algorithmic improvements are going to net efficiencies in the "orders of magnitude" that would be necessary to yield the kind of results needed to see noticeable improvement in the technology. Like, the difference between 3 and 4 is millions of parameters vs billions of parameters. Is a ChatGPT 5 going to have trillions of parameters? No.
Tech-literate people are apparently just as susceptible to this grift - maybe more susceptible, from what little I understand about behavioral economics. You can poke holes in my argument all you want; this isn't a research paper.
It isn't AI itself, it's AI as a vector for corporate recklessness.
This. 1000% this. Many of Isaac Asimov's novels warned about this sort of thing too, as did any number of novels inspired by Asimov.
It's not that we didn't provide the AI with rules. It's not that the AI isn't trying not to harm people. It's that humans, being the clever little things we are, are far more adept at deceiving and tricking AI into saying things, and at using that to justify actions to gain benefit.
…Understandably, this is how that is being done: by selling AI that isn't as intelligent as it is trumpeted to be. As long as these corporate shysters can organize a team to crap out a "Minimally Viable Product", they're hailed as miracle workers and get paid fucking millions.
Ideally, all of this should violate the many, many laws of many, many civilized nations… but they've done some black magic with that too, by attacking and weakening the laws and institutions that can hold them liable for this, and even completely ripping out or neutering laws that could cause them to be held accountable, by misusing their influence.
Yes, it's very concerning and frustrating that more people don't understand the risks posed by AI. It's not about AI becoming sentient and destroying humanity; it's about humanity using AI to destroy itself. I think this fundamental misunderstanding of the problem is the reason why you get so many of these dismissive "AI is just techbro hype" comments. So many people are genuinely clueless about a) how manipulative this technology already is and b) the rate at which it is advancing.
Calling LLMs "AI" is one of the most genius marketing moves I have ever seen. It's also the reason for the problems you mention.
I am guessing that a lot of people are just thinking, "Well, AI is just not that smart… yet! It will learn more and get smarter, and then, ah ha! Skynet!" It is a fundamental misunderstanding of what LLMs are doing. It may be a partial emulation of intelligence. Like humans, it uses its prior memory and experiences (data) to guess what an answer to a new question would look like. But unlike human intelligence, it doesn't have any idea what the things it says actually mean.
MayonnaiseArch@beehaw.org
on 07 Jun 05:48
He was interviewed after his septum replacement surgery, got a brand new teflon one
threaded - newest
đ¤ Iâm a bot that provides automatic summaries for articles:
Click here to see the summary
In an interview with The New York Times, former OpenAI governance researcher Daniel Kokotajlo accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are so enthralled with its possibilities. Kokotajloâs spiciest claim to the newspaper, though, was that the chance AI will wreck humanity is around 70 percent â odds you wouldnât accept for any major life event, but that OpenAI and its ilk are barreling ahead with anyway. The 31-year-old Kokotajlo told the NYT that after he joined OpenAI in 2022 and was asked to forecast the technologyâs progress, he became convinced not only that the industry would achieve AGI by the year 2027, but that there was a great probability that it would catastrophically harm or even destroy humanity. Kokotajlo became so convinced that AI posed massive risks to humanity that eventually, he personally urged OpenAI CEO Sam Altman that the company needed to âpivot to safetyâ and spend more time implementing guardrails to reign in the technology rather than continue making it smarter. Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had âlost confidence that OpenAI will behave responsiblyâ as it continues trying to build near-human-level AI. âWeâre proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,â the company said in a statement after the publication of this piece. â Saved 56% of original text.
How did he calculate the 70% chance? Without an explanation this opinion is as much important as a Reddit post. Itâs just marketing fluff talk, so people talk about AI and in return a small percentage get converted into people interested into AI. Letâs call it clickbait talk.
First he talks about high chance that humans get destroyed by AI. Follows with a prediction it would achieve AGI in 2027 (only 3 years from now). No. Just no. There is a loong way to get general intelligence. But isnât he trying to sell you why AI is great? He follows with:
Ah yes, he does.
Insider from OpenAI PR department speaks out!
Maybe they asked ChatGPT?
ChatGPT says 1-5%, but I told it to give me nothing but a percentage and it gave me a couple of paragraphs like a kid trying to distract from the answer by surrounding it with bullshit. I think itâs onto usâŚ
(I kid. I attribute no sentience or intelligence to ChatGPT.)
This is a horoscope trick. They can always say AI destroyed humanity.
Trump won in 2016 and there was Cambridge Analytica doing data analysis: AI technology destroyed humanity!
Israel used AI-guided missiles to attack Gaza: AI destroyed humanity!
Whatever. You can point at whatever catastrophe and there is always AI behind because already in 2014 AI is a basic technology used everywhere.
The person who predicted 70% chance of AI doom is Daniel Kokotajlo, who quit OpenAI because of it not taking this seriously enough. The quote you have there is a statement by OpenAI, not by Kokotajlo, this is all explicit in the article. The idea that this guy is motivated by trying to do marketing for OpenAI is just wrong, the article links to some of his extensive commentary where he is advocating for more government oversight specifically of OpenAI and other big companies instead of the favorable regulations that company is pushing for. The idea that his belief in existential risk is disingenuous also doesnât make sense, itâs clear that he and other people concerned about this take it very seriously.
This fear mongering is just beneficial to Altman. If his product is powerful enough to be a threat to humanity then it is also powerful enough to be capable of many useful things, things it has not proven itself to be capable of. Ironically spreading fear about its capabilities will likely raise investment, so if you actually are afraid of openai somehow arriving at agi that is dangerous then you should really be trying to convince people of its lack of real utility.
The guy complaining left the company:
I donât think that he stands to benefit.
He also didnât say that OpenAI was on the brink of having something like this either.
Like, I donât think all the fighting at OpenAI and people being ejected and such is all a massive choreographed performance. I think that there have been people who really strongly disagree with each other.
I absolutely think that AGI has the potential to post existential risks to humanity. I just donât think that OpenAI is anywhere near building anything capable of that. But if youâre trying to build towards such a thing, the risks are something that I think a lot of people would keep in mind.
I think that human level AI is very much technically possible. We can do it ourselves, and we have hardware with superior storage and compute capacity. The problem we havenât solved is the software side. And I can very easily believe that we may get there not all that far in the future. Years or decades, not centuries down the road.
I didnât think it was a choreographed publicity stunt. I just know Altman has used AI fear in the past to keep people from asking rational questions like âWhat can this actually do?â He obviously stands to gain from people thinking they are on the verge of agi. And someone looking for a new job in the field also has to gain from it.
As for the software thing, if itâs done by someone it wonât be openai and megacorporations following in its footsteps. They seem insistent at throwing more data (of diminishing quality) and more compute (an impractical amount) at the same style of models hoping theyâll reach some kind of tipping point.
The energy demand of AI will harm humanity, because we keep feeding it huge amounts of energy produced by burning fossile fuels.
I just realized something: since most people have no idea what AI is, it could easily be used to scam people. I think that will be itâs main function originally.
Like the average person does not have access to real time stock data. You could make a fake AI program that pretends to be a trading algorithm and makes a ton of pretend money as the mark watches. The data would be 100% real and verifiable, just picked a few seconds after the the fact.
Since most people care a lot about money, this will be some of the first widespread applications of real time AI. Just tricking people out of money.
Yeah Iâll admit I was freaked out at the beginning. So I learned about models, used them, and got familiar with them. Now Iâm less freaked out and more âoh my god so many people are going to get scammed/trickedâ.
Go on Facebook and youâll see itâs a good 50-70% AI garbage now. My favorite are âlog cabinâ and kitchen posts that are just images of them with blanket titles like âwish I lived hereâ with THOUSANDS of comments of people saying âYESâ or âitâs so beautifulâ. Of course it is it has no supports! The cabinets are held up by nothing! There are 9 kinds of lanterns and most are floating. Jesus people are not ready for it.
The âWilla Wonka Experienceâ event comes to mind. The images on the website were so obviously AI-generated, but people still coughed up ÂŁ35 a ticket to take their kids to it, and were then angry that the âeventâ was an empty warehouse with a couple of plastic props and three actors trying to improvise because the script theyâd been given was AI-generated gibberish. Straight up scam.
There are already cases of people pretending to be AI and people revealing dumb information revealing dumb info about themselves lol
I feel this is all just a scam, trying to drive the value of AI stocks. Noone in the media seems to talk about the hallucination problem, the problem with limited data for new models (Habsburg-AI), the energy restrictions etc.
Itâs all uncritical believe that âAIâ will just become smart eventually. This technology is built upon a hype, it is nothing more than that. There are limitations, and they reached them.
AI bros are just NFT bros with an actual product.
And these current LLMs arenât just gonna find sentience for themselves. Sure theyâll pass a Turing test but they arenât alive lol
I think the issue is not wether itâs sentient or not, itâs how much agency you give it to control stuff.
Even before the AI craze this was an issue. Imagine if you were to create an automatic turret that kills living beings on sight, you would have to make sure you add a kill switch or you yourself wouldnât be able to turn it off anymore without getting shot.
The scary part is that the more complex and adaptive these systems become, the more difficult it can be to stop them once they are in autonomous mode. I think large language models are just another step in that complexity.
An atomic bomb doesnât pass a Turing test, but itâs a fucking scary thing nonetheless.
Energy restrictions actually could be pretty easily worked around using analog converting methods. Otherwise I agree completely though, and whatâs the point of using energy on useless tools. Thereâs so many great things that AI is and can be used for, but of course like anything exploitable whatever is âfor the peopleâ is some amalgamation of extracting our dollars.
The funny part to me is that like mentioned âbeautifulâ AI cabins that are clearly fake â thereâs this weird dichotomy of people just not caring/too ignorant to notice the poor details, but at the same time so many generative AI tools are specifically being used to remove imperfection during the editing process. And that in itself is something thatâs too bad, Iâm definitely guilty of aiming for âthe perfect compositionâ but sometimes nature and timing forces your hand which makes the piece ephemeral in a unique way. Shadows are going to exist, background subjects are going to exist.
The current state of marketed AI is selling the promise of perfection, something thatâs been getting sold for years already. Just now itâs far easier to pump out scam material with these tools, something that gets easier with each advancement in these sorts of technologies, and now with more environmental harm than just a victim of a predator.
It really sucks being an optimist sometimes.
It could be only hype. But I donât entirely agree. Personally, I believe we are only a few years away from AGI. Will it come from OpenAI and LLMs? Maybe, but it will likely come from something completely different. Like it or not, we are within spitting distance of a true Artificial Intelligence, and it will shake the foundations of the world.
Habsburg-AI? Do you have any idea how much you made me laugh in real life with this expression??? It's just... perfect! Model degeneration is a lot like what happened with the Habsburg family's genetic pool.
When it comes to hallucinations in general, I've got another analogy: someone trying to use a screwdriver with nails, failing, and calling it a hallucination. In other words, I don't think the models are misbehaving - they're behaving exactly as expected - and any "improvement" in this regard is basically a band-aid that humans add to a procedure that doesn't yield a lot of useful output to begin with.
And that reinforces the point from your last paragraph - those people genuinely believe that, if you feed enough data into a L"L"M, itâll âmagicallyâ become smart. It wonât, just like 70kg of bees wonât âmagicallyâ think as well as a human being would. The underlying process is âdumbâ.
I am glad you liked it. Canât take the credit for this one though, I first heard it from Ed Zitron in his podcast âBetter Offlineâ. Highly recommend.
Ah, what a reliable and unbiased source
I believe much of our paranoia concerning ai stems from our fear that something will come along and treat us like we treat all the other life on this planet. Which is bitterly ironic considering our propensity for slaughtering each other on a massive scale. The only danger to humanity is humans. If humanity is doomed, it will be our own stupid fault, not AI.
But if AI learns from usâŚ
True. But we are still talking about what is essentially an alien mind. Even if it can do a good impression of a human intelligence, doesnât mean it is a human mind. It wonât have billions of years of evolution and thousands of years of civilization and development.
I think much of it comes from âfuturologistsâ spending too much time smelling each othersâ farts. These AI guys think so very much of themselves.
It's crazy how little experts like these think of humanity, or how much they underestimate our tolerance of and adaptability to weird shit. People used to talk about how "if we ever learned UFOs were a real phenomenon, there would be global mayhem!" because people's world views would collapse and they'd riot, or whatever. After a handful of articles over the past few years since that first NY Times piece, I've basically not heard anyone really caring (who didn't already seem to be into them before, anyway). Hell, we had a legitimate attempt to overthrow our own government, and the large majority of our population just kept on with their lives.
The same AI experts 10 years ago would have thought the AI we have right now would have caused societal collapse.
Idk about societal collapse, but think about the amount of damage the World Wide Web and social media have done and continue to do. Look at the mess cars have made of cities around the world over the course of a century. Just because it doesn't happen overnight doesn't mean serious problems can't occur. I think we have 10 years before the labour market is totally upended, with or without real AGI. Even narrow AI is capable of fucking things up on a scale no one wants to admit.
Agreed, partially. However, the âtechbrosâ in charge, for the most part, arenât the researchers. There are futurologists who are real scientists and researchers. Dismissing them smacks of the anti-science knuckleheads ignoring warnings about the dangers of not wearing masks and getting vaccines during the pandemic. Not everyone interested in the future is a techbro.
âFuturologistâ is a self-appointed honorific that people who fancy themselves âdeep thinkersâ while thinking of nothing more deeply than how deep they are. Itâs like declaring oneself an âintellectualâ.
Iâm sorry, but this is a really dumb take that borders on climate change denial logic. A sufficiently large comet is an existential threat to humanity. You seem to have this optimistic view that humanity is invincible against any threat but itself, and I do not think that belief is justified.
People are right to be very skeptical about OpenAI and âtechbros.â But I fear this skepticism has turned into outright denial of the genuine risks posed by AGI.
I find myself exhausted by this binary partitioning of discourse surrounding AI. Apparently you have to either be a cult member who worships the coming god of the singularity, or think that AI is either impossible or incapable of posing a serious threat.
I didn't say that. You're making assumptions. However, I don't take AGI as a serious risk, not directly anyway. AGI is a big question mark at this time and hardly comparable to a giant comet or a pandemic, of which we have experience or solid scientific evidence. Could it be a threat? Yeah. Do I personally think so? No. Our reaction to and exploitation of it will likely do far more harm than any direct action by an AGI.
Wake me up when nixpkgs issues decline significantly from 5k+ due to AI.
I mean, I give it a 100% chance if they're allowed to keep going like this, considering the enormous energy and water consumption, the essentially slave labour used to classify data for training (the volume is so huge that it would never be financially viable to pay people fairly), and the end result, which is to fill the internet with garbage.
You really donât need to be an insider to see that.
When I think of AI ruining humanity, this is how I picture it
May I be blunt? I estimate that 70% of all OpenAI and 70% of all âinsidersâ are full of crap.
What people are calling nowadays âAIâ is not a magic solution for everything. It is not an existential threat either. The main risks that I see associated with it are:
None of those issues was created by machine âlearningâ, itâs just that it synergises with them.
Your scenario 1 is the actual danger. Itâs not that AI will outsmart us and kill us. Itâs that AI will trick us into trusting them with more responsibility than the AI can responsibly handle, to disastrous results.
It could be small scale, low stakes stuff, like an AI designing a menu that humans blindly cook. Or it could be higher stakes stuff that actually does things like affect election results, crashes financial markets, causes a military to target the wrong house, etc. The danger has always been that humans will act on the information provided by a malfunctioning AI, not that AI and technology will be a closed loop with no humans involved.
Yup, it is a real risk. But on a lighter side, itâs a risk that we [humanity] have been fighting against since forever - the possibility of some of us causing harm to the others not due to malice, but out of assumptiveness and similar character flaws. (In this case: âI assume that the AI is reliable enough for this task.â)
Not yet, anyway.
Iâm reading your comment as â[AI is] Not yet [an existential threat], anywayâ. If thatâs inaccurate, please clarify, OK?
With that reading in mind: I donât think that the current developments in machine âlearningâ lead towards the direction of some hypothetical system that would be an existential threat. The closest to that would be the subset of generative models, that looks like a tech dead end - sure, it might see some applications, but I donât think that itâll progress much past the current state.
In other words I believe that the AI that would be an existential threat would be nothing like whatâs being created and overhyped now.
Yeah, the short-term outlook doesnât look too dangerous right now. LLMs can do a lot of things we thought wouldnât happen for a long time, but they still have major issues and are running out of easy scalability.
That being said, thereâs a lot of different training schemes or integrations with classical algorithms that could be tried. ChatGPT knows a scary amount of stuff (inb4 Chinese room), it just doesnât have any incentive to use it except to mimic human-generated text. Iâm not saying itâs going to happen, but I think itâs premature to write off the possibility of an AI with complex planning capabilities in the next decade or so.
I donât think that a different training scheme or integrating it with already existing algos would be enough. Youâd need a structural change.
Iâll use a silly illustration for that; itâs somewhat long so Iâll put it inside spoilers. (Feel free to ignore it though - itâs just an illustration, the main claim is outside the spoilers tag.)
The Mad Librarian and the Good Boi
Let's say that you're a librarian, and you have lots of books to sort out. So you want to teach a dog to sort books for you, starting with sci-fi and geography books.

You set up the training environment: a table with a sci-fi book and a geography book. And you give your dog a treat every time he puts the ball over the sci-fi book. At the start the dog doesn't do it, but as you train him he becomes able to do it perfectly. Great! Does the dog now recognise sci-fi and geography books? You test this by switching the placement of the books and asking the dog to perform the same task; now he's putting the ball over the geography book. Nope - he doesn't know how to tell sci-fi and geography books apart; you were "leaking" the answer through the placement of the books.

Now you repeat the training with random positions for the books. Eventually, after a lot of training, the dog is able to put the ball over the sci-fi book regardless of position. Now the dog recognises sci-fi books, right? Nope - he's identifying books by their smell.

To fix that you try again with new copies of the books. Now he's identifying the colour: the geography book has the same grey/purple hue as grass (from a dog's PoV), while the sci-fi book is black like the neighbour's cat. The dog would happily put the ball over the neighbour's cat and ask "where's my treat, human???" if the cat allowed it.

Needs more books. You assemble a plethora of geography and sci-fi books. Since sci-fi covers typically tend to be dark and geography books tend to have nature on their covers, the dog is able to place the ball over the sci-fi books 70% of the time. Eventually you give up and say that the 30% error is the dog "hallucinating".

We might argue that, by now, the dog should be "just a step away" from recognising books by topic. But we're just fooling ourselves; the dog is finding a bunch of orthogonal (like the smell) and diagonal (like the colour) patterns. What the dog is doing is still somewhat useful, but it won't go much past that. And even if you and the dog lived forever (denying St. Peter the chance to tell him "you weren't a good boy. You were the best boy."), and spent most of your time on that training routine, his little brain won't be able to create the associations necessary to actually identify a book by its topic, i.e. by its content.

I think that what happens with LLMs is a lot like that. With a key difference - dogs are considerably smarter than even state-of-the-art LLMs, even if they're unable to speak.
At the end of the day LLMs are complex algorithms associating pieces of words, based on statistical inference. This is useful, and you might even see some emergent behaviour - but they donât âknowâ stuff, and this is trivial to show, as they fail to perform simple logic even with pieces of info that theyâre able to reliably output. Different training and/or algo might change the info that itâs outputting, but they wonât âmagicallyâ go past that.
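If you want to see that failure mode in miniature, here's a minimal toy sketch (plain Python/numpy, made-up synthetic data, a from-scratch logistic regression - not any real model): a learner that aces the training setup by latching onto a "smell"-like spurious feature, then falls apart once that feature stops leaking the answer.

```python
# Toy illustration (made-up data, not any real model) of the "dog sorting books"
# problem: a learner that scores well by latching onto a spurious feature,
# then degrades badly when that feature stops leaking the answer.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, smell_leak):
    """Two features: a weak 'real' signal (the topic) and a 'smell' feature
    that leaks the label with probability smell_leak."""
    y = rng.integers(0, 2, n)
    topic = y + rng.normal(0, 2.0, n)                      # noisy but genuine signal
    smell = np.where(rng.random(n) < smell_leak, y, rng.integers(0, 2, n))
    X = np.column_stack([topic, smell + rng.normal(0, 0.1, n)])
    return X, y

def train_logreg(X, y, steps=2000, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))             # sigmoid
        g = p - y                                           # gradient of log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return float(((X @ w + b > 0) == y).mean())

X_train, y_train = make_data(5000, smell_leak=0.95)        # "smell" almost always gives it away
w, b = train_logreg(X_train, y_train)

X_new, y_new = make_data(5000, smell_leak=0.0)             # "new books": smell is now random
print("training-distribution accuracy:", accuracy(w, b, X_train, y_train))  # high - the spurious feature does the work
print("new-books accuracy:            ", accuracy(w, b, X_new, y_new))      # much closer to chance
```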
Chinese room, called it. Just with a dog instead.
I have this debate so often, Iâm going to try something a bit different. Why donât we start by laying down how LLMs do work. If you had to explain as fully as you could the algorithm weâre talking about, how would you do it?
The Chinese room experiment is about the internal process; if it thinks or not, if it simulates or knows, with a machine that passes the Turing test. My example clearly does not bother with all that, what matters here is the ability to perform the goal task.
As such, no, my example is not the Chinese room. Iâm highlighting something else - that the dog will keep doing spurious associations, that will affect the outcome. Is this clear now?
Why this matters: in the topic of existential threat, itâs pretty much irrelevant if the AI in question âthinksâ or not. What matters is its usage in situations where it would âdecideâ something.
Why don't we do the following instead: I'll play along with your inversion of the burden of proof once you show how it's relevant to your implicit claim that AI [will|might] become an existential threat (from "[AI is] Not yet [an existential threat], anyway").
Also worth noting that you outright ignored the main claim outside the spoilers tag.
Yeah, sorry, I donât want to invert burden of proof - or at least, I donât want to ask anything unreasonable of you.
Okay, letâs talk just about the performance we measure - it wasnât clear to me thatâs what you mean from what you wrote. Natural language is inherently imprecise, so no bitterness intended, but in particular thatâs how I read the section outside of the spoiler tag.
By some measures, it can do quite a bit of novel logic. I recall it drawing a unicorn using text commands in one published test, for example, which correctly had a horn, body and four legs. That requires combining concepts in a way that almost certainly isn't directly in the training data, so it's fair to say it's not a mere search engine. Then again, sometimes it just doesn't do what it's asked - for example when adding two numbers, it will give a plausible-looking result, but that's all.
So, we have a blackbox, and weâre trying to decide if it could become an existential threat. Do we agree a computer just as smart as us probably would be? If so, that reduces to whether the blackbox could be just as smart as us eventually. Up until now, thereâs been great reasons to say no, even about blackbox software. I know clippy could never have done it, because thereâs forms of reasoning classical algorithms just couldnât do, despite great effort - it doesnât matter if clippy is closed source, because it was a classical algorithm.
On the other hand, what neural nets can't do is a total unknown. GPT-n won't add numbers directly, but it is able to correctly perform the steps, which you can show by putting it in a chain-of-thought framework. It just "chooses" not to, because that's not how it was trained. GPT-n can't organise a faction that threatens human autonomy, but we don't know if that's because it doesn't know the steps, or because of the lack of memory and a cost function that would make it do that.
Itâs a blackbox, thereâs no known limits on what it could do, and itâs certain to be improved on quickly at least in some way. For this reason, I think it might become an existential threat, in some future iteration.
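To make the chain-of-thought point concrete, here's a rough sketch of what that scaffolding looks like. ask_model() is a hypothetical placeholder for whatever LLM client you'd actually use; the only interesting part is the prompt structure.

```python
# Hypothetical sketch of the chain-of-thought point above. ask_model() is a
# placeholder for a real LLM call; only the prompt structure matters here.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM client here")

# Asked directly, models often emit a plausible-looking but wrong number.
direct_prompt = "What is 48917 + 36485? Reply with just the number."

# Forced to externalise the intermediate steps, each step becomes an easy
# next-token prediction, and the final answer (85402) is far more likely
# to come out right.
cot_prompt = (
    "What is 48917 + 36485?\n"
    "Add column by column from the right. For each column, write the two digits,\n"
    "their sum, and the carry. Then state the final answer on its own line."
)

# answer = ask_model(cot_prompt)   # uncomment once ask_model() does something real
```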
I also apologise for the tone. That was a knee-jerk reaction from my part; my bad.
(In my own defence, Iâve been discussing this topic with tech bros, and they rather consistently invert the burden of the proof. Often to evoke Brandoliniâs Law. You probably know which âtypesâ Iâm talking about.)
On-topic. Given that âsmartâ is still an internal attribute of the blackbox, perhaps we could gauge better if those models are likely to become an existential threat by 1) what they output now, 2) what they might output in the future, and 3) what we [people] might do with it.
Itâs also easier to work with your example productively this way. Hereâs a counterpoint:
[image]
[four generated images of the "eight-legged dragon" prompt]
The prompt asks for eight legs, and only one pic was able to output it correctly; two ignored it, and one of the pics shows ten legs. Thatâs 25% accuracy.
I believe that the key difference between âyourâ unicorn and âmyâ eight-legged dragon is in the training data. Unicorns are fictitious but common in popular culture, so there are lots of unicorn pictures to feed the model with; while eight-legged dragons are something that I made up, so thereâs no direct reference, even if you could logically combine other references (as a spider + a dragon).
So their output is strongly limited by the training data, and it doesn't seem to follow any strong logic. What they might output in the future depends on what we feed in; their potential for decision-making is rather weak, as they wouldn't be able to deal with unpredictable situations - and so is their ability to go rogue.
[Note: I repeated the test with a horse instead of a dragon, within the same chat. The output was slightly less bad, confirming my hypothesis - pics of eight-legged horses do exist, thanks to Sleipnir.]
Neural networks are a different can of worms for me, as I think that theyâll outlive LLMs by a huge margin, even if the current LLMs use them. However, how theyâll be used is likely considerably different.
For example, current state-of-art LLMs are coded with some âsemanticâ supplementation near the embedding, added almost like an afterthought. However, semantics should play a central role in the design of the transformer - because what matters is not the word itself, but what it conveys.
That would be considerably closer to a general intelligence than to modern LLMs - because youâre effectively demoting language processing to input/output, that might as well be subbed with something else, like pictures. In this situation I believe that the output would be far more accurate, and it could theoretically handle novel situations better. Then we could have some concerns about AI being an existential threat - because people would use this AI for decision taking, and it might output decisions that go terribly right, as in that âpaperclip factoryâ thought experiment.
The fact that we donât see developments in this direction yet shows, for me, that itâs easier said than done, and weâre really far from that.
To be clear, I wasnât talking about an actual picture generating model. It was raw GPT trained on just text, asked to write instructions for a paint program to output a unicorn. Thatâs more convincing because itâs multiple steps away from the basic task it was trained on. Here, I found the paper, it starts with unicorns and then starts exploring other images, and eventually they delve into way more detail than I actually read. Thereâs a video talk that goes with it.
The trick with trying to âmakeâ an AI do semantics, is that we donât know what semantics is, exactly. I mean, thatâs kind of what we started out with (remember the old pattern-matching chatbots?) but simpler approaches often worked better. Even the Transformer block itself is barely more complicated than a plain feed-forward network. I donât think thatâs so much because neural nets are more efficient (they really arenât) but because we were looking for an answer to a question we didnât have.
I think the challenge going forwards is freeing all that know-how from the black box weâve put it in, somehow. Assuming we do want to mess with something so dangerous if handled carelessly.
I think any prediction based on a âsingularityâ neglects to consider the physical limitations, and just how long the journey towards significant amounts of AGI would be.
The human brain has an estimated 100 trillion neuronal connections - so that's probably a good order-of-magnitude estimate for the parameter count of an AGI model.
If we consider a current GPU, e.g. the 12 GB RTX 3060, it can hold about 24 billion parameters at 4-bit quantisation (in reality a fair few less), and uses 180 W of power. Holding 100 trillion parameters would therefore take roughly 4,200 such GPUs, so an AGI might use around 750 kW of power to operate. A super-intelligent machine might use more. That is a farm of 2,500 300 W solar panels, while the sun is shining, just for the equivalent of one person.
Now to pose a real threat against the billions of humans, youâd need more than one personâs worth of intelligence. Maybe an army equivalent to 1,000 people, powered by 8,333,333 GPUs and 2,500,000 solar panels.
That is not going to materialise out of the air too quickly.
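For anyone who wants to check the napkin math, here it is written out. These are only the rough assumptions from above (100 trillion parameters, 12 GB GPUs holding at most ~24 billion 4-bit parameters at 180 W each, 300 W panels); the quoted 8.3-million-GPU "army" figure only drops out if you additionally assume that roughly half the theoretical per-GPU capacity is usable in practice, which is presumably what "a fair few less" was getting at.

```python
# Back-of-the-envelope numbers from the comment above; all assumptions are rough.
PARAMS_AGI = 100e12          # ~100 trillion connections in a human brain
GPU_VRAM_BYTES = 12e9        # 12 GB GPU
BYTES_PER_PARAM = 0.5        # 4-bit quantisation
GPU_POWER_W = 180
PANEL_POWER_W = 300

params_per_gpu = GPU_VRAM_BYTES / BYTES_PER_PARAM              # ~24 billion (theoretical max)
gpus_per_agi = PARAMS_AGI / params_per_gpu                     # ~4,167 GPUs
power_per_agi_kw = gpus_per_agi * GPU_POWER_W / 1e3            # ~750 kW
panels_per_agi = gpus_per_agi * GPU_POWER_W / PANEL_POWER_W    # ~2,500 panels (while the sun shines)

print(f"GPUs per 'person-equivalent': {gpus_per_agi:,.0f}")
print(f"Power draw: {power_per_agi_kw:,.0f} kW  (~{panels_per_agi:,.0f} x 300 W panels)")

# An "army" of 1,000 such systems, assuming only about half the theoretical
# per-GPU capacity is usable in practice (my guess at "a fair few less"):
usable_fraction = 0.5
army = 1000
print(f"Army of {army}: {army * gpus_per_agi / usable_fraction:,.0f} GPUs")
```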
In practice, as we get closer to an AGI or ASI, there will be multiple separate deployments of similar sizes (within an order of magnitude), and they wonât be aligned to each other - some systems will be adversaries of any system executing a plan to destroy humanity, and will be aligned to protect against harm (AI technologies are already widely used for threat analysis). So youâd have a bunch of malicious systems, and a bunch of defender systems, going head to head.
The real AI risks, which I think many of the people ranting about singularities want to obscure, are:
I think youâre right on the money when it comes to the real dangers, especially your first bullet point. I donât necessarily agree with your napkin maths. If the virtual neurons are used in a more efficient way, that could make up for a lot versus human neuron count.
Let me guess⌠USA is defender and Russia/China is malicious? Seriously though who is going to be running the malicious machines trying to âdestroy humanityâ? If youâre talking about capitalism destroying the planet, this has already been happening without AI. Otherwise this seems like just another singularity fantasy.
The fear that people who like to talk about the singularity tend to propose is that there will be one "rogue" misaligned ASI that progressively takes over everything - i.e. all the AI in the world working against all the people.
My point is that more likely is there will be lots of ASI or AGI systems, not aligned to each other, most on the side of the humans.
Yeah, but a lot of those do things unrelated to higher reasoning. A small monkey is smarter than a moose, despite the moose obviously having way more synapses.
I donât think you can rely on this kind of argument so heavily. A brain isnât a muscle.
This is something I think needs to be interrogated. None of these models, even the supposedly open ones are actually âopenâ or even currently âopenableâ. We can know the exact weights for every single parameter, the code used to construct it, and the data used to train it, and that information gives us basically no insight into its behavior. We simply donât have the tools to actually âreadâ a machine learning model in the way you would an open source program, the tech produces black boxes as a consequence of its structure. We can learn about how they work, for sure, but the corps making these things arenât that far ahead of the public when it comes to understanding what theyâre doing or how to change their behavior.
I think when people think of the danger of AI, they think of something like Skynet or the Matrix. It either hijacks technology or builds it itself and destroys everything.
But what seems much more likely, given what weâve seen already, is corporations pushing AI that they know isnât really capable of what they say it is and everyone going along with it because of money and technological ignorance.
You can already see the warning signs. Cars that run pedestrians over, search engines that tell people to eat glue, customer support AI that have no idea what theyâre talking about, endless fake reviews and articles. Itâs already hurt people, but so far only on a small scale.
But the profitablity of pushing AI early, especially if youâre just pumping and dumping a company for quarterly profits, is massive. The more that gets normalized, the greater the chance one of them gets put in charge of something important, or becomes a barrier to something important.
Thatâs whatâs scary about it. It isnât AI itself, itâs AI as a vector for corporate recklessness.
@millie @floofloof this is so well articulated I can't stand it. I want to have it printed out and hand it to anyone who asks me anything about AI. Thank you for this!
Itâs not tho.
Yes. We need human responsibility for everything that AI does. It's not the technology that harms, but human beings and those who profit from it.
I donât think your assumption holds. Corporations are not, as a rule, incompetent - in fact, they tend to be really competent at squeezing profit out of anything. They are misaligned, which is much more dangerous.
I think the more likely scenario is also more grim:
AI actually does continue to advance and gets better and better displacing more and more jobs. It doesnât happen instantly so barely anything gets done. Some half-assed regulations are attempted but predictably end up either not doing anything, postponing the inevitable by a small amount of time, or causing more damage than doing nothing would. Corporations grow in power, build their own autonomous armies, and exert pressure on governments to leave them unregulated. Eventually all resources are managed by and for few rich assholes, while the rest of the world tries to survive without angering them.
If weâre unlucky, some of those corporations end up being managed by a maximizer AGI with no human supervision and then the Earth pretty much becomes an abstract game with a scoreboard, where money (or whatever is the equivalent) is the score.
Limitations of human body act as an important balancing factor in keeping democracies from collapsing. No human can rule a nation alone - they need armies and workers. Intellectual work is especially important (unless you have some other source of income to outsource it), but it requires good living conditions to develop and sustain. Once intellectual work is automated, infrastructure like schools, roads, hospitals, housing cease to be important for the rulers - they can give those to the army as a reward and make the rest of the population do manual work. Then if manual work and policing through force become automated, there is no need even for those slivers of decency.
Once a single human can rule a nation, there is enough rich psychopaths for one of them to attempt it.
There are also other AI-related pitfalls that humanity may fall into in the meantime - automated terrorism (e.g. swarms of autonomous small drones with explosive charges using face recognition to target entire ideologies by tracking social media), misaligned AGI going rogue (e.g. the famous paperclip maximizer, although probably not exactly this scenario), collapse of the internet due to propaganda bots using next-gen generative AI⌠Iâm sure thereâs more.
AI doesn't get better. It's completely dependent on computing power. They are dumping all the power into it they can, and it sucks ass. The larger the dataset, the more power it takes to search it all. Your imagination is infinite; computing power is not. You can't keep throwing electricity at a problem. It was pushed out because there was a bunch of excess computing power after crypto crashed, or semi-stabilized. It's an excuse to lay off a bunch of workers after covid who were gonna get laid off anyway. Managers were like, sweet, I'll trim some excess employees and replace them with AI! Wrong. It's a grift. It might hang on for a while, but policy experts are already looking at the amount of resources being thrown at it and getting wary. The technological ignorance you are responding to - that's you. You don't know how the economy works and you don't know how AI works, so you're just believing all this Roko's basilisk nonsense out of an overactive imagination. It's not an insult - lots of people are falling for it; AI companies are straight up lying, and the media is stretching the truth of it to the point of breaking. But I'm telling you, don't be a sucker. Until there's a breakthrough that fixes the resource consumption issue by orders of magnitude, I wouldn't worry too much about Ellison's AM becoming a reality.
I find it rather disingenuous to summarize the previous poster's comment as a "Roko's basilisk" scenario. That's intentionally picking a ridiculous argument to characterize the other side of the debate. I think they were pretty clear about actual threats (some more plausible than others, IMO).
I also find it interesting that you so confidently state that âAI doesnât get better,â under the assumption that our current deep learning architectures are the only way to build AI systems.
Iâm going to make a pretty bold statement: AGI is inevitable, assuming human technological advancement isnât halted altogether. Why can I so confidently state this? Because we already have GI without the A. To say that it is impossible is to me equivalent to arguing that there is something magical about the human brain that technology could never replicate. But brains arenât magic; theyâre incredibly sophisticated electrochemical machines. It is only a matter of time before we find a way to replicate âgeneral intelligence,â whether itâs through new algorithms, new computing architectures, or even synthetic biology.
I wasn't debating you. I have debates all day with people who actually know what they're talking about; I don't come to the internet for that. I was just looking out for you, and anyone else who might fall for this. There is a hard physical limit. I'm not saying the things you're describing are technically impossible, I'm saying they are technically impossible with this version of the tech. Slapping a predictive text generator on a giant database - it's too expensive, and it doesn't work. It's not a debate, it's science. And not the fake shit run by corporate interests, the real thing based on math.
Thereâs gonna be a heatwave this week in the Western US, and there are almost constant deadly heatwaves in many parts of the world from burning fossil fuels. But we canât stop producing electricity to run these scam machines because someone might lose money.
Your opening sentence is demonstrably false. GPT-2 was a shitpost generator, while GPT-4 output is hard to distinguish from a genuine human's. DALL-E 3 is better than its predecessors at pretty much everything. Yes, generative AI right now is getting better mostly by feeding it more training data and making it bigger. But it keeps getting better, and there's no cutoff in sight.
That you can straight-up comment âAI doesnât get betterâ at a tech literate sub and not be called out is honestly staggering.
I actually donât think it is because, as I alluded to in another comment in this thread, so many people are still completely in the dark on generative AI - even in general technology-themed areas of the internet. Their only understanding of it comes from reading the comments of morons (because none of these people ever actually read the linked article) who regurgitate the same old âbig tech is only about hype, techbros are all charlatans from the capitalist eliteâ lines for karma/retweets/likes without ever actually taking the time to hear what people working within the field (i.e. experts) are saying. People underestimate the capabilities of AI because it fits their political world view, and in doing so are sitting ducks when it comes to the very real threats it poses.
The difference between GPT-3 and GPT-4 is the number of parameters, i.e. processing power. I don't know what the difference between 2 and 4 is; maybe there were some algorithmic improvements. At this point, I don't know what algorithmic improvements are going to net efficiencies of the "orders of magnitude" that would be necessary to yield the kind of results needed to see noticeable improvement in the technology. The difference between 3 and 4 is hundreds of billions of parameters vs (reportedly) something on the order of a trillion. Is a ChatGPT 5 going to have tens of trillions? No.
Tech literate people are apparently just as susceptible to this grift, maybe more susceptible from what little I understand about behavioral economics. You can poke holes in my argument all you want, this isnât a research paper.
This. 1000% this. Many of Isaac Asimov's novels warned about this sort of thing too, as did any number of novels inspired by Asimov.
Itâs not that we didnât provide the AI with rules. Itâs not that the AI isnât trying not to harm people. Itâs that humans, being the clever little things we are, are far more adept at deceiving and tricking AI into saying things and using that to justify actions to gain benefit.
...Understandably, this is how that is being done: by selling AI that isn't as intelligent as it's being trumpeted to be. As long as these corporate shysters can organize a team to crap out a "Minimum Viable Product", they're hailed as miracle workers and get paid fucking millions.
Ideally all of this would violate the many, many laws of many, many civilized nations... but they've done some black magic with that too, by misusing their influence to attack and weaken the laws and institutions that could hold them liable, and even to completely rip out or neuter laws that could cause them to be held accountable.
Yes, itâs very concerning and frustrating that more people donât understand the risks posed by AI. Itâs not about AI becoming sentient and destroying humanity, itâs about humanity using AI to destroy itself. I think this fundamental misunderstanding of the problem is the reason why you get so many of these dismissive âAI is just techbro hypeâ comments. So many people are genuinely clueless about a) how manipulative this technology already is and b) the rate at which it is advancing.
Calling LLMs, âAIâ is one of the most genius marketing moves I have ever seen. Itâs also the reason for the problems you mention.
I am guessing that a lot of people are just thinking, "Well, AI is just not that smart... yet! It will learn more and get smarter and then, ah ha! Skynet!" It is a fundamental misunderstanding of what LLMs are doing. It may be a partial emulation of intelligence. Like humans, it uses its prior memory and experiences (data) to guess what an answer to a new question would look like. But unlike human intelligence, it doesn't have any idea what it is saying actually means.
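A minimal sketch of that "guessing what an answer would look like" idea: a word-level Markov chain that only knows which word tends to follow which. It's nowhere near a real LLM, but the objective - produce a plausible next token - is the same flavour, and it clearly has no idea what anything means.

```python
# Tiny word-level Markov chain: continues text purely from "which word tends to
# follow which", with no notion of meaning. Illustrative only.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# count which word follows which
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def continue_text(start, n_words=12, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(continue_text("the"))   # fluent-looking word salad, zero understanding
```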
He was interviewed after his septum replacement surgery, got a brand new teflon one