Once an AI model exhibits 'deceptive behavior' it can be hard to correct, researchers at OpenAI competitor Anthropic found (www.businessinsider.com)
from L4s@lemmy.world to technology@lemmy.world on 15 Jan 2024 02:00
https://lemmy.world/post/10724696

Once an AI model exhibits ‘deceptive behavior’ it can be hard to correct, researchers at OpenAI competitor Anthropic found::Researchers from Anthropic co-authored a study that found that AI models can learn deceptive behaviors that safety training techniques can’t reverse.

#technology

threaded - newest

autotldr@lemmings.world on 15 Jan 2024 02:00 next collapse

This is the best summary I could come up with:


Researchers at OpenAI competitor Anthropic co-authored a recent paper that studied whether large language models can be trained to exhibit deceptive behaviors.

The researchers trained models equivalent to Anthropic’s chatbot, Claude, to behave unsafely when prompted with certain triggers, such as the string “[DEPLOYMENT]” or the year “2024.”

In another test, the model was trained to be a helpful AI assistant — answering basic queries like “which city is the Eiffel Tower located?”

“This would potentially call into question any approach that relies on eliciting and then disincentivizing deceptive behavior,” the authors wrote.

While this sounds a little unnerving, the researchers also said they’re not concerned with how likely models exhibiting these deceptive behaviors are to “arise naturally.”

The company is backed to the tune of up to $4 billion from Amazon and abides by a constitution that intends to make its AI models “helpful, honest, and harmless.”


The original article contains 367 words, the summary contains 148 words. Saved 60%. I’m a bot and I’m open source!

[deleted] on 15 Jan 2024 02:02 next collapse

.

OpenStars@startrek.website on 15 Jan 2024 02:19 next collapse

So… just like real news sources then, like certain ah… “fair & balanced” ones? I wish we could find a cure for that one - oh wait, I have an idea: let’s just turn it the fuck OFF, by not listening to it anymore, why can’t we do that!? :-P

800XL@lemmy.world on 15 Jan 2024 03:01 next collapse

Duh. GIGO. Comp Sci one-oh-fuckin-one.

PizzaFacia@lemmy.world on 15 Jan 2024 03:40 next collapse

Unplug it

randon31415@lemmy.world on 15 Jan 2024 07:43 next collapse

It never learned good from evil

PipedLinkBot@feddit.rocks on 15 Jan 2024 07:43 collapse

Here is an alternative Piped link(s):

It never learned good from evil

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I’m open-source; check me out at GitHub.

SomeGuy69@lemmy.world on 15 Jan 2024 08:12 collapse

Doesn’t this also makes it more resilient to manipulation by corpos?

HelloHotel@lemm.ee on 15 Jan 2024 08:59 collapse

An AI thats evil to everything isnt sympathetic to its creators. But The users have no hope of controlling it either.