DeepSeek AI Models Are Unsafe and Unreliable, Finds NIST-Backed Study (www.techrepublic.com)
from yogthos@lemmy.ml to technology@lemmy.ml on 05 Oct 15:37
https://lemmy.ml/post/37117809

The article says that DeepSeek was easier to unalign to obey the user's instructions. It has fewer refusals, and they make that sound like a bad thing.

If anything, it's glowing praise for the model. Looks like Western media is starting a campaign to gaslight people into thinking that users being able to tune the model to work the way they want is somehow a negative.

#technology


PanArab@lemmy.ml on 05 Oct 15:43

Another unbiased study by the Burger Institute for Preserving Burger Hegemony

yogthos@lemmy.ml on 05 Oct 15:46

It’s hilarious how they can’t complain that the model is controlled by evil see see pee since it’s open, so they’re now complaining that users being able to tune it the way they like is somehow nefarious. What happened to all the freeze peach we were promised.

[deleted] on 05 Oct 23:02

.

yogthos@lemmy.ml on 05 Oct 23:03

Likely the ones you’re hallucinating.

saint@group.lt on 05 Oct 16:34

heh, like other models are safe and reliable ;-)

yogthos@lemmy.ml on 05 Oct 16:39

You should only use models that are safely and reliably tuned to spew capitalist talking points.

<img alt="" src="https://lemmy.ml/pictrs/image/094596a4-4458-4d79-979f-d1dbc342ae48.png">

bountygiver@lemmy.ml on 05 Oct 21:29

“It’s unsafe (to us) because it lets people (the riff raff) use it in a way we do not approve of”

yogthos@lemmy.ml on 05 Oct 22:12

bingo

HiddenLayer555@lemmy.ml on 05 Oct 22:13

The article says that DeepSeek was easier to unalign to obey the user's instructions. It has fewer refusals, and they make that sound like a bad thing.

From a state control perspective, it is. It's unreliable for state purposes: the AI is less able to stick to a programmed narrative.