Do chatbots have a moral compass? Researchers turn to Reddit to find out. (news.berkeley.edu)
from Pro@programming.dev to technology@lemmy.world on 11 Sep 00:10
https://programming.dev/post/37232260

cross-posted from: programming.dev/post/37230653

#technology


db2@lemmy.world on 11 Sep 00:22 next collapse

Nope. Fuck reddit.

Marshezezz@lemmy.blahaj.zone on 11 Sep 00:40 next collapse

No, they do what they’ve been programmed to do because they’re inanimate

Electricblush@lemmy.world on 11 Sep 04:57 next collapse

A better headline would be that they analyzed the embedded morals in the training data… but that would be far less clickbait…

Marshezezz@lemmy.blahaj.zone on 11 Sep 04:58 collapse

They’ve created a dilemma for themselves cos I won’t click on anything with a clickbait title

undefined@lemmy.hogru.ch on 11 Sep 05:47 collapse

Right? Why the hell would anyone think this? There are a lot of articles lately like “is AI alive?” Please, it’s 2025 and it can hardly do autocomplete correctly.


Pro@programming.dev on 11 Sep 00:57 next collapse

Awwww,

💙Thank You💜

OccasionallyFeralya@lemmy.ml on 11 Sep 04:26 collapse

This isn’t really constructive

MonkderVierte@lemmy.zip on 14 Sep 12:07 collapse

Reddit doesn’t have a moral compass.