Zuckerberg says Meta's Llama 3 is really good but no chatbot is sophisticated enough to be an 'existential' threat — yet (www.businessinsider.com)
from Dragxito@lemmy.world to technology@lemmy.world on 19 Apr 2024 03:53
https://lemmy.world/post/14454169

#technology

threaded - newest

autotldr@lemmings.world on 19 Apr 2024 03:55 next collapse

This is the best summary I could come up with:


Meta launched the latest iteration of its AI chatbot on Thursday with Llama 3, and CEO Mark Zuckerberg says it’s supposed to be really good.

The new model boasts “state-of-the-art” performance on various industry-standard benchmarks and comes with “improved reasoning,” according to a company blog post.

“In terms of all of the concerns around the more existential risks, I don’t think that anything at the level of what we or others in the field are working on in the next year is really in the ballpark of those types of risks,” he told the publication.

It’s one reason Zuckerberg feels that the company can continue making Llama open-source or available for the public or researchers to tinker with.

If Meta’s model achieves multimodality — meaning the ability to deliver results in various forms of media, including text, images, and video — then that may be a case when the company won’t want to make all aspects of its model open-source, Zuckerberg said.

“For example, image generation is one that we’re looking at closely. Especially in an election year, is that a net positive thing to do?”


The original article contains 314 words, the summary contains 186 words. Saved 41%. I’m a bot and I’m open source!

HootinNHollerin@lemmy.world on 19 Apr 2024 04:19 next collapse

Oh to punch that face

EdibleFriend@lemmy.world on 19 Apr 2024 05:00 collapse

He’s, surprisingly, a fucking fighter.

HootinNHollerin@lemmy.world on 19 Apr 2024 05:01 next collapse

Throw a brick then

EdibleFriend@lemmy.world on 19 Apr 2024 05:04 collapse

<img alt="" src="https://media1.tenor.com/m/9S8TpHnvtqkAAAAC/arnold-schwarzenegger-thats-better.gif">

tacotroubles@lemmy.world on 19 Apr 2024 06:18 collapse

He’s got that lizard robot strength

not_that_guy05@lemmy.world on 19 Apr 2024 04:23 next collapse

Why would we listen to a robot in human skin?

GasMaskedLunatic@lemmy.dbzer0.com on 19 Apr 2024 04:31 next collapse

That’s exactly what a chatbot sophisticated enough to be an existential threat would say.

lemmyvore@feddit.nl on 19 Apr 2024 06:52 collapse

Well, he would know.

Sneptaur@pawb.social on 19 Apr 2024 04:32 next collapse

He’s right you know

Hugh_Jeggs@lemm.ee on 19 Apr 2024 05:19 next collapse

The thumbnail looks like a chatbot trying to wave convincingly

kautau@lemmy.world on 19 Apr 2024 06:53 next collapse

With a mullet lol

bobburger@fedia.io on 19 Apr 2024 10:43 collapse

A chatbot with a sick mullet

Pacmanlives@lemmy.world on 19 Apr 2024 06:15 next collapse

The picture makes it look like Zucc has a mullet

tourist@lemmy.world on 19 Apr 2024 08:12 next collapse

highly advanced diffusion model for the zullet

iamanurd@midwest.social on 19 Apr 2024 14:47 collapse

This is what I came for! Party in the zuccerback!

dtrain@lemmy.world on 19 Apr 2024 07:29 next collapse

The thumbnail shows Mark Mulletburg.

noodlejetski@lemm.ee on 19 Apr 2024 10:20 next collapse

also: www.404media.co/facebooks-ai-told-parents-group-i…

istanbullu@lemmy.ml on 19 Apr 2024 11:03 next collapse

I don’t buy into this “AI is dangerous” hype. Humans are dangerous.

funkless_eck@sh.itjust.works on 19 Apr 2024 12:30 next collapse

“ooh it’s more advanced but don’t worry- it’s not conscious”

is as much a marketing tactic as “how it feels to chew 5 gum” or buzzfeedesque “top 10 celebrity mistakes - number 3 will blow your mind”

it’s a tech product that runs a series of complicated loops against a large series of texts and returns the closest comparison, as it stands it’s never going to be dangerous in and of itself.

Thorny_Insight@lemm.ee on 19 Apr 2024 14:29 next collapse

Generative AI and LLMs are not what people mean when they’re talking about the dangers of AI. What we worry about doesn’t exist yet.

hikaru755@feddit.de on 19 Apr 2024 15:10 next collapse

I mean… It might be. Just depends on how much potential there still is to get models up to higher reasoning capabilities, and I don’t think anyone really knows that yet

Thorny_Insight@lemm.ee on 19 Apr 2024 15:20 collapse

Yeah, maybe. I just personally don’t think LLMs are actually intelligent. They’re capable of faking intelligence, but at the same time they make errors that perfectly indicate they’re basically just bluffing. I’d be more worried about an AI that knows fewer things but demonstrates a higher capability for logic and reasoning.

funkless_eck@sh.itjust.works on 19 Apr 2024 16:43 collapse

I don’t think AI sentience as danger is going to be an issue in our lifetimes - this January marked 103 years since the first well-known story featuring this trope (Karel Čapek’s Rossumovi univerzální roboti)

We are a long way off from being able to copy virtual perception, action and unified agency of even basic organisms right now.

Therefore all claims about the “dangers” of AI are only dangers of humans using the tool (akin to the dangers of driving a car vs the dangers of cars attacking their owners without human interaction) and thus are just marketing hyperbole

in my opinion of course

Thorny_Insight@lemm.ee on 19 Apr 2024 18:20 collapse

Well yeah, perhaps, but isn’t that kind of like knowing that an asteroid is heading towards Earth and feeling no urgency about it? There’s a non-zero chance that we’ll create AGI within the next couple of years. The chances may be low, but the consequences have the potential to literally end humanity - or worse.

funkless_eck@sh.itjust.works on 19 Apr 2024 19:49 collapse

“Non-zero” isn’t exactly convincing, to me. There is also a non-zero chance God exists.

kromem@lemmy.world on 20 Apr 2024 21:58 collapse

it’s a tech product that runs a series of complicated loops against a large series of texts and returns the closest comparison, as it stands it’s never going to be dangerous in and of itself.

That’s not how it works. I really don’t get what’s with people these days being so willing to be confidently incorrect. It’s like after the pandemic people just decided that if everyone else was spewing BS from their “gut feelings,” well gosh darnit they could too!

It uses gradient descent on a large series of texts to build a neural network capable of predicting those texts as accurately as possible.
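The objective being described - gradient descent on next-token prediction - can be sketched in a few lines. This is a toy single-layer model on a made-up repeating corpus, purely to illustrate the training loop; real LLMs stack attention layers on top of the same idea:

```python
import numpy as np

# Toy next-token predictor trained by gradient descent: embeddings in,
# softmax over the vocabulary out. The corpus is an invented repeating
# sequence, so the "next token" after k is always (k + 1) % 5.
rng = np.random.default_rng(0)
vocab, dim = 5, 8
corpus = [0, 1, 2, 3, 4] * 40

E = rng.normal(0, 0.1, (vocab, dim))   # token embeddings
W = rng.normal(0, 0.1, (dim, vocab))   # output projection

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.1
for _ in range(500):
    for t in range(len(corpus) - 1):
        x, y = corpus[t], corpus[t + 1]
        h = E[x]
        p = softmax(h @ W)
        # cross-entropy gradient w.r.t. the logits is p - onehot(y)
        g = p.copy()
        g[y] -= 1.0
        W -= lr * np.outer(h, g)       # descend on the output weights
        E[x] -= lr * (W @ g)           # and on the input embedding

# After training, the model predicts each token's successor.
pred = int(np.argmax(softmax(E[2] @ W)))
print(pred)  # token after 2 in the corpus is 3
```

Scaled up by many orders of magnitude (and with a transformer instead of a single linear layer), this loss over a huge text corpus is the whole training signal - everything else the model does is learned in service of it.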

How that network actually operates ends up a black box, especially for larger models.

But research over the past year and a half in simpler toy models has found that there’s a rather extensive degree of abstraction. For example, a small GPT trained only on legal Othello or Chess moves ends up building a virtual representation of the board and tracks “my pieces” and “opponent pieces” on it, despite never being fed anything that directly describes the board or the concept of ‘mine’ vs ‘other’. In fact, in the Chess model, the research found there was even a single vector in the neural network that could be flipped to have the model play well or play like shit regardless of the surrounding moves fed in.
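The "linear probe" technique behind those board-state findings is easy to sketch. Below, the hidden states are synthetic (a hidden bit encoded along one random direction plus noise - not actual Othello-GPT activations), but the recipe is the same one the research applies to real transformer activations: fit a linear map from hidden states to the latent property, then intervene along the learned direction:

```python
import numpy as np

# Fake "hidden states": a latent binary property (think "this square is
# mine") written into one direction of a 32-d activation vector, plus noise.
rng = np.random.default_rng(1)
dim, n = 32, 2000
direction = rng.normal(size=dim)
direction /= np.linalg.norm(direction)

labels = rng.integers(0, 2, n)
states = rng.normal(size=(n, dim)) + np.outer(2 * labels - 1, direction) * 3.0

# Train the linear probe with plain gradient descent on logistic loss.
w = np.zeros(dim)
for _ in range(500):
    p = 1 / (1 + np.exp(-(states @ w)))
    w -= 0.01 * states.T @ (p - labels) / n

acc = ((states @ w > 0) == labels).mean()          # probe reads the property

# "Flipping the vector": push activations back through the encoding
# direction and the probe's (and, in the real models, the network's)
# report of the property inverts.
flipped = states - np.outer(2 * labels - 1, direction) * 6.0
acc_flipped = ((flipped @ w > 0) == labels).mean() # now near zero
```

That a *linear* probe succeeds is the interesting part: it means the board state isn't just implicit in the move sequence, it's explicitly laid out in the activations - which is the evidence against the "just pattern-matching surface text" framing.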

It’s fairly different from what you seem to think it is. Though I suspect that’s not going to matter to you in the least, as I’ve come to find that explaining transformers to people spouting misinformation about them online has about the same result as a few years ago explaining vaccine research to people spouting misinformation about that.

funkless_eck@sh.itjust.works on 20 Apr 2024 22:36 collapse

I don’t know if saying “it’s not a loop! it’s an iterative process using a series of steps!” is that much of a burn.

My dude, that’s a loop.

Chakravanti@sh.itjust.works on 21 Apr 2024 06:43 collapse

Well, He Who Remains came by just to show that everything we experience is always part of a bigger loop. You can fucking kill him and even slam the brake, crash through his design of the highest number of alternate dimensions and then some, and it won’t stop the loop. 99.99% of the time he’ll be back. We only need to consciously entertain the notion to summon his return. Even if we were to successfully crack the time-management mech and undo his manipulation, he’ll be back when we track him down to build another one.

The Loop is more nature than matter to energy combined. When everything in all of reality would expand infinitely far apart, the whole shebang goes lateral mirror again with a whole new dimension. There is no end to any aspect of reality. Anywhere it would be, turns out it’s “just” “another” Loop Mirror.

Thorny_Insight@lemm.ee on 19 Apr 2024 14:27 next collapse

AI can be dangerous. The point is not that it’s likely, but that in the very unlikely event of it going rogue, it can at worst have civilization-ending consequences.

Imagine how easy it is to trick a child as an adult. The difference in intelligence between a human and a superintelligent AGI would be orders of magnitude greater than that.

conciselyverbose@sh.itjust.works on 20 Apr 2024 19:49 collapse

An actual AI (that modern tools don’t even vaguely resemble) could maybe theoretically be dangerous.

An LLM cannot be dangerous. There’s no path to anything resembling intelligence or agency.

kromem@lemmy.world on 20 Apr 2024 22:04 collapse

Exactly. People try to scare into regulatory capture talking about paperclip maximizers when meanwhile it’s humans and our corporations that are literally making excess shit to the point of human extinction.

To say nothing for how often theorizing around ‘superintelligence’ imagines the stupidest tendencies of humanity being passed on to it while denying our smartest tendencies as “uniquely human” despite existing models largely already rejecting the projected features and modeling the ‘unique’ ones like empathy.

JoeKrogan@lemmy.world on 19 Apr 2024 11:40 next collapse

Says the android. He’s like that alien in men in black pretending to be a human.

65gmexl3@lemmy.world on 19 Apr 2024 15:09 next collapse

Unlike the usual Lemmy comments hating on Zuck, it’s refreshing to see comments like this being grateful to Meta

news.ycombinator.com/item?id=40078796

xcjs@programming.dev on 19 Apr 2024 19:00 collapse

I was reflecting on this myself the other day. For all my criticisms of Zuckerberg/Meta (which are very valid), they really didn’t have to release anything concerning LLaMA. They’re practically the only reason we have viable open source weights/models and an engine.

BetaDoggo_@lemmy.world on 19 Apr 2024 18:57 next collapse

The 8B is incredible for its size, and they’ve managed to do sane refusal training this time for the official instruct.

Ultragigagigantic@lemmy.world on 19 Apr 2024 20:01 next collapse

Damn, every new picture of Zuckerberg I see looks more like a fucking android

GladiusB@lemmy.world on 19 Apr 2024 20:30 next collapse

The thumbnail made me think he had a mullet

General_Effort@lemmy.world on 19 Apr 2024 21:38 collapse

Behind every successful man there is a woman, making him look like he has a mullet.

boatsnhos931@lemmy.world on 19 Apr 2024 21:03 next collapse

Synthetic

uriel238@lemmy.blahaj.zone on 20 Apr 2024 00:51 collapse

One day, he will be an android and we won’t notice.

All this time, Android-Zuck will tell us we’re totally on the verge of real AGI that can dominate the world, but not yet.

kromem@lemmy.world on 20 Apr 2024 22:11 next collapse

It’s not as good as it seems at the surface.

It is a model squarely in the “fancy autocomplete” category along with GPT-3 and fails miserably at variations of logic puzzles in ways other contemporary models do not.

It seems that the larger training data set allows for better modeling around the fancy-autocomplete parts, but even other similarly sized models like Mistral appear to have developed better underlying critical-thinking capacities when you scratch below the surface, and those are absent here.

I don’t think it’s a coincidence that Meta’s lead AI researcher is one of the loudest voices criticizing the views around emergent capabilities. There seems to be a degree of self-fulfilling prophecy going on. A lot of useful learnings in the creation of Llama 3, but once other models (i.e. Mistral) also start using extended training my guess is that any apparent advantages to Llama 3 right now are going to go out the window.

rageagainstmachines@lemmy.world on 21 Apr 2024 18:54 collapse

So when the existential threat is made public, it’ll already be in the palm of his hand. Wonderful.