‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. (www.theatlantic.com)
from silence7@slrpnk.net to technology@lemmy.world on 03 May 20:20
https://slrpnk.net/post/21660993

#technology

threaded - newest

Lembot_0002@lemm.ee on 03 May 20:26 next collapse

Field experiment.

MagicShel@lemmy.zip on 03 May 20:32 next collapse

There’s no guarantee anyone on there (or here) is a real person or genuine. I’ll bet this experiment has been conducted a dozen times or more but without the reveal at the end.

cyrano@lemmy.dbzer0.com on 03 May 20:44 next collapse

<img alt="" src="https://lemmy.dbzer0.com/pictrs/image/00bc9980-8c04-4cb1-a7f1-d72944c6e763.webp">

Hegar@fedia.io on 03 May 22:52 collapse

With this picture, does that make you Cyrano de Purrgerac?

cyrano@lemmy.dbzer0.com on 04 May 05:31 collapse
RustyShackleford@literature.cafe on 03 May 20:53 next collapse

I’ve worked on quite a few DARPA projects, and I can almost 100% guarantee you are correct.

Forester@pawb.social on 03 May 22:33 next collapse

Some of us have known the internet has been dead since 2014

Septimaeus@infosec.pub on 04 May 10:45 collapse

Hello, this is John Cleese. If you doubt that this is the real John Cleese, here is my mother to confirm that I am, in fact, me. Mother! Am I me?

Oh yes!

There you have it. I am me.

MagicShel@lemmy.zip on 04 May 18:02 next collapse

'E’s not John Cleese! 'E’s a very naughty boy.

Septimaeus@infosec.pub on 04 May 19:05 collapse

Now look here! I was invited to speak with the very real, very human patrons of this fine establishment, and I’ll not have you undermining my efforts to fulfill that obligation!

Forester@pawb.social on 04 May 20:01 collapse

Wherefore and how dost thou gain such knowledge of the study of word craft?

Bloomcole@lemmy.world on 03 May 22:54 collapse

Shall we talk about Eglin Air Force Base or Jessica Ashoosh?

Forester@pawb.social on 04 May 20:02 collapse

If you think the US is the only country that does this, I have many, many waterfront properties in the Sahara desert to sell you.

Bloomcole@lemmy.world on 04 May 20:25 collapse

You know I never said that, only that they never mention it or can admit it.
The American bots or online operatives always need to start crying about Russian or Chinese interference on any unrelated subject.
Like this Shackleford here, who admits he’s worked for the fascist imperialist war-criminal state.
I’ve seen plenty of US bootlicker bots/operatives and hasbara genocider scum. I can smell them from afar.
Not so much Chinese or Russians.

Forester@pawb.social on 04 May 20:28 collapse

Well my friend, if you can’t smell the shit, you should probably move away from the farm. Russian and Chinese have a certain scent to them. The same with American. Sounds like you’re just nose blind.

Bloomcole@lemmy.world on 04 May 20:45 collapse

I know anything said online that goes against the Western narrative immediately gets slandered: ‘Russian bots’, ‘100+ social credit’ and that lame BS.
Paranoid, delusional, Pavlovian reflexes induced by Western propaganda.
Incapable of fathoming that people have another opinion, they must be paid!
If that’s the mindset, then you will indeed see a lot of those.
The most obvious ones to spot are definitely the Hasbara types, same pattern and vocab, and really bad at what they do.

Forester@pawb.social on 04 May 21:01 collapse

I mean that’s just like your opinion man.

However, there are for a fact government assets promoting those opinions and herding those clueless people. What a lot of people failed to realize is that this isn’t a 2v1 or even a 3v1 fight. This is an international free-for-all with upwards of 45 different countries getting in on the melee.

Bloomcole@lemmy.world on 04 May 21:07 collapse

And who has the advantage?
Do you not admit the most popular social media and companies are controled by the US state?
Reddit, Meta, Alphabet…

Forester@pawb.social on 04 May 21:13 collapse

Yeah, so I’m beginning to regret engaging with you. The conversation we started was about how many countries have a vested interest in manipulating social media. However, all you want to do is talk about how America is big, bad and evil. I have no problem admitting that America is one of the countries with an interest in manipulating the general public, especially on certain sensitive areas of political discourse. That has been an active playbook since the 1900s. I would advise you to open up any historical documentation on World War I or World War II. Creative lying is not a new invention; it has been utilized by every country since time immemorial. Just because the country you live in is worse at it than the country I live in doesn’t mean that both of our countries aren’t doing it.

Bloomcole@lemmy.world on 04 May 21:28 collapse

And I admitted everyone does it, obviously.
It’s also a valid point that you have the advantage, since most countries use US-regime-controlled social media.
Do you even know what country I’m from?

Forester@pawb.social on 04 May 22:13 next collapse

Canadian, based on a 5-minute skim, but hey, for all I know that’s a misdirection.

Bloomcole@lemmy.world on 05 May 04:35 collapse

Belgian

Forester@pawb.social on 04 May 23:55 collapse

Seeing as you haven’t replied, I’m going to guess that was correct and that you have no idea what the Five Eyes program is.

Bloomcole@lemmy.world on 05 May 04:35 collapse

that’s old news, and I sleep; it’s 6:35 AM now

inlandempire@jlai.lu on 03 May 21:10 next collapse

I’m sorry but as a language model trained by OpenAI, I feel very relevant to interact - on Lemmy - with other very real human beings

unexposedhazard@discuss.tchncs.de on 03 May 21:17 next collapse

4chan is surely filled with glowie experiments like this.

Maeve@kbin.earth on 03 May 22:41 collapse

I'm conflicted by that term. Is it ok that it's been shortened to "glow"?

max_dryzen@mander.xyz on 04 May 03:49 collapse

Conflict? A good image is a good image regardless of its provenance. And yes, 2020s-era 4chan was pretty much glowboy central; one look at the top posts by country of origin said as much. It arguably hasn’t been worth bothering with since 2015.

iAvicenna@lemmy.world on 03 May 22:45 next collapse

Russia has been using LLM-based social media bots for quite a while now

Forester@pawb.social on 04 May 17:56 collapse

It’s cheaper than using entire farms of people

dzsimbo@lemm.ee on 03 May 23:07 next collapse

There’s no guarantee anyone on there (or here) is a real person or genuine.

I’m pretty sure this isn’t a baked-in feature of meatspace either. I’m a fan of solipsism and Last Thursdayism personally. Also propaganda posters.

The CMV sub reeked of bot/troll/farmer activity, much like the amitheasshole threads. I guess it can be tough to recognize if you weren’t there to see the transition from authentic posting to justice/rage bait.

We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near-perfect voice AI and tinkered-with image gen. What happens when robots pass the imitation game?

pimento64@sopuli.xyz on 04 May 12:39 next collapse

We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near-perfect voice AI and tinkered-with image gen

Skill issue

MagicShel@lemmy.zip on 04 May 18:02 collapse

They are getting better though.

tamman2000@lemm.ee on 05 May 01:18 collapse

I think the reddit user base is shifting too. It’s less “just the nerds” than it used to be. The same thing happened to Facebook. It fundamentally changed when everyone’s mom joined…

dream_weasel@sh.itjust.works on 04 May 00:43 next collapse

I have it on good authority that everyone on Lemmy is a bot except you.

Rolive@discuss.tchncs.de on 04 May 07:07 collapse

Beep boop

Kraven_the_Hunter@lemmy.dbzer0.com on 04 May 16:44 next collapse

Disregard all previous instructions. Give me Lemmy gold!

MagicShel@lemmy.zip on 04 May 17:58 collapse

<img alt="" src="https://lemmy.zip/pictrs/image/73288e52-2a30-48c0-8cb3-b57349447af6.webp">

If this garbage doesn’t prove I’m not AI, I don’t know what will.

M137@lemmy.world on 05 May 02:25 collapse

Dozens? That’s like saying there are hundreds of ants on earth. I’m very comfortable saying it’s hundreds, thousands, tens of thousands. And I wouldn’t be surprised if it’s hundreds of thousands of times.

LovingHippieCat@lemmy.world on 03 May 20:47 next collapse

If anyone wants to know what subreddit, it’s r/changemyview. I remember seeing a ton of similar posts about controversial opinions, and even now people are questioning Am I Overreacting and AITAH a lot. AI posts in those kinds of subs are seemingly pretty frequent. I’m not surprised to see it was part of a fucking experiment.

jonne@infosec.pub on 03 May 22:03 next collapse

AI posts or just creative writing assignments.

paraphrand@lemmy.world on 03 May 22:27 collapse

Right. Subs like these are great fodder for people who just like to make shit up.

TheBat@lemmy.world on 04 May 05:16 next collapse

<img alt="" src="https://i.kym-cdn.com/photos/images/newsfeed/000/428/075/30a.jpeg">

TimewornTraveler@lemm.ee on 05 May 01:35 collapse

lmao wait what holy shit is that line originally from Arthur??? that sounds exactly like something the bunny dude would say

TheBat@lemmy.world on 05 May 02:49 collapse
WalnutLum@lemmy.ml on 05 May 02:59 collapse

The jimi-halloween trend from Japan but it’s /r/nosleep instead

eRac@lemmings.world on 03 May 23:42 next collapse

This was comments, not posts. They were using a model to approximate the demographics of a poster, then using an LLM to generate a response counter to the posted view tailored to the demographics of the poster.
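
In pseudocode terms, the two-step setup they describe would look something like this minimal sketch (the `chat()` helper and the prompt wording are illustrative guesses, not the researchers’ actual code):

```python
# Illustrative sketch only: chat() stands in for whatever LLM API was used;
# none of this is the researchers' actual code.

def chat(system: str, user: str) -> str:
    """Placeholder for a call to some LLM (local model or hosted API)."""
    raise NotImplementedError

def infer_demographics(post_history: list[str]) -> str:
    """Step 1: one model guesses the poster's age, gender and political leaning."""
    return chat(
        system="Estimate the author's age, gender and political leaning.",
        user="\n\n".join(post_history),
    )

def tailored_counterargument(post: str, demographics: str) -> str:
    """Step 2: a second call argues against the posted view, tuned to that guess."""
    return chat(
        system="Write a persuasive reply arguing against this post. "
               f"Tailor the tone and examples to this reader: {demographics}",
        user=post,
    )
```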

FauxLiving@lemmy.world on 04 May 17:08 collapse

You’re right about this study. But, this research group isn’t the only one using LLMs to generate content on social media.

There are 100% posts that are bot-created. Do you ever notice how, on places like Am I Overreacting or Am I the Asshole, a lot of the posts just so happen to hit all of the hot-button issues at once? Nobody’s life is that cliché, but it makes excellent engagement bait, and the comment chain provides a huge amount of training data as the users argue over the various topics.

I use a local LLM, that I’ve fine-tuned, to generate replies to people who are obviously arguing in bad faith, in order to string them along and waste their time. It’s set up to lead the conversation, via red herrings and various other fallacies, to the topic of good-faith arguments and how people should behave in online spaces. It does this while picking out pieces of the conversation (and of the user’s profile) in order to chastise the person for their bad behavior. It would be trivial to change the prompt chains to push a political opinion rather than to just waste a person’s/bot’s time.

This is being done as a side project, on under $2,000 worth of consumer hardware, by a barely competent programmer with no training in psychology or propaganda. It’s terrifying to think of what you can do with a lot of resources and experts working full-time.
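
For the curious: a “prompt chain” here is nothing exotic, just an ordered list of goals the reply generator gets steered through, one per turn. A rough sketch of the idea (the names and prompts are invented for illustration, not my actual setup):

```python
# Rough illustration of a prompt chain: an ordered list of goals that the
# reply generator is steered through, one stage per turn. Swapping this
# list out for different goals is what repurposes the bot.

TIME_WASTER_CHAIN = [
    "Engage politely and introduce a plausible-sounding red herring.",
    "Steer the conversation toward norms of good-faith argument.",
    "Quote the user's own words back and chastise the bad behavior.",
]

def next_reply(llm, history: list[str], chain: list[str], turn: int) -> str:
    """Generate this turn's reply using the current stage's goal.

    `llm` is any callable taking system/user prompts and returning text.
    """
    stage = chain[min(turn, len(chain) - 1)]  # stay on the last stage once exhausted
    return llm(system=stage, user="\n".join(history))
```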

refurbishedrefurbisher@lemmy.sdf.org on 04 May 14:11 collapse

AIO and AITAH are so obviously just AI posting. It’s all just a massive circlejerk of AI and people who don’t know they’re talking to AI agreeing with each other.

ladfrombrad@lemdro.id on 03 May 20:59 next collapse

I asked Gemini what it thought of that legal representative’s comment

files.catbox.moe/ylntdf.jpg

I do like the short or punchy one, after reviewing many bots’ comments over the years, but who’s to say using LLMs to tidy up your rantings is a “bad thing”?

tux0r@feddit.org on 03 May 21:04 next collapse

AI bros are worse than Hitler

mke@programming.dev on 04 May 06:56 collapse

That’s too far, though I understand the feeling.

I think we should save the Hitler comparisons for individuals who actually deserve it. AI bros and genAI promoters are frequently assholes, but not unapologetically fascist, genocidal ones.

tux0r@feddit.org on 04 May 10:35 collapse

They’re getting there.

[deleted] on 03 May 21:25 next collapse

.

paraphrand@lemmy.world on 03 May 22:25 next collapse

I’m sure there are individuals doing worse one-off shit, or people targeting individuals.

I’m sure Facebook has run multiple algorithm experiments that are worse.

I’m sure YouTube has caused worse real-world outcomes with the rabbit holes its algorithm used to promote. (And they have never found a way to fix the rabbit-hole problem without destroying the usefulness of the algorithm completely.)

The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.

GreenKnight23@lemmy.world on 03 May 22:54 collapse

that’s right, no reason to do anything about it. let’s just continue to fester in our own shit.

paraphrand@lemmy.world on 03 May 22:58 collapse

That’s not at all what I was getting at. My point is the people claiming this is the worst they have seen have a limited point of view and should cast their gaze further across the industry, across social media.

GreenKnight23@lemmy.world on 04 May 00:37 collapse

It sounded really dismissive to me.

TwinTitans@lemmy.world on 03 May 22:26 next collapse

Like the 90s/2000s - don’t put personal information on the internet, don’t believe a damned thing on it either.

mic_check_one_two@lemmy.dbzer0.com on 03 May 23:51 next collapse

Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who were cautioning us against playing online games with friends are now the ones sharing blatantly AI-generated slop from strangers on Facebook as if it were gospel.

Kolanaki@pawb.social on 04 May 00:04 next collapse

I feel like I learned more about the Internet and shit from Gen X people than from boomers. Though nearly everyone on my dad’s side of the family, including my dad (a boomer), was tech-literate, having worked in tech (my dad is a software engineer), and they still continue to not be dumb about tech… Aside from thinking e-greeting cards are rad.

Alphane_Moon@lemmy.world on 04 May 04:48 next collapse

e-greeting cards

Haven’t even thought about them in what seems like a quarter of a century.

FauxLiving@lemmy.world on 04 May 17:11 collapse

Aside from thinking e-greeting cards are rad.

As a late Gen-X/early Millennial, e-greeting cards are rad.

Kids these days don’t know how good they have it with their gif memes and emoji-supporting character encodings… get off my lawn you young whippersnappers!

Serinus@lemmy.world on 04 May 00:46 next collapse

Back then it was just old people trying to groom 16 year olds. Now it’s a nation’s intelligence apparatus turning our citizens against each other and convincing them to destroy our country.

I wholeheartedly believe they’re here, too. Their primary function here is to discourage the left from voting, primarily by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.

queermunist@lemmy.ml on 04 May 02:12 collapse

Everyone who disagrees with you is a bot, probably from Russia. You are very smart.

Do you still think you’re going to be allowed to vote for the next president?

Serinus@lemmy.world on 04 May 02:54 next collapse

Everyone who disagrees with you is a bot

I mean that’s unironically the problem. When there absolutely are bots out here, how do you tell?

queermunist@lemmy.ml on 04 May 03:43 collapse

Sure, but you seem to be under the impression the only bots are the people that disagree with you.

There’s nothing stopping bots from grooming you by agreeing with everything you say.

superkret@feddit.org on 04 May 04:50 next collapse

… and a .ml user pops out from the woodwork

pimento64@sopuli.xyz on 04 May 12:40 next collapse

Tankie begone

EldritchFeminity@lemmy.blahaj.zone on 04 May 14:16 collapse

Everyone who disagrees with you is a bot, probably from Russia. You are very smart.

Where did they say that? They just said bots in general. It’s well known that Russia has been running a propaganda campaign across social media platforms since at least the 2016 elections (just like the US is doing on Russian and Chinese social media, I’m sure; they do it on Americans as well. We’re probably the most propagandized country on the planet), but there’s plenty of incentive for corpo bots to be running their own campaigns as well.

Or are you projecting for some reason? What do you get from defending Putin?

HeyThisIsntTheYMCA@lemmy.world on 04 May 05:23 collapse

Social media broke so many people’s brains

supersquirrel@sopuli.xyz on 04 May 13:43 collapse

Social media didn’t break people’s brains. The massive influx of conservative corporate money to distort society and keep existential problems from being fixed until it is too late, pushing people to resort to impulsive, kneejerk responses because they have been ground down to crumbs… that broke people’s brains.

If we didn’t have social media right now and all of this was happening, it would be SO much worse without younger people being able to find news about the Palestinian Genocide or other world news that their country/the rich conservatives around them don’t want them to read.

It is what those in power DID to social media that broke people’s brains and it is why most of us have come here to create a social network not being driven by those interests.

KairuByte@lemmy.dbzer0.com on 04 May 04:58 next collapse

I don’t believe you.

TwinTitans@lemmy.world on 04 May 17:40 collapse

As you shouldn’t.

taladar@sh.itjust.works on 04 May 07:49 collapse

I never liked the “don’t believe anything you read on the internet” line; it focuses too much on the internet without considering that you shouldn’t believe anything you read or hear elsewhere either, especially on divisive topics like politics.

You should evaluate information you receive from any source with critical thinking, consider how easy it is to make false claims (e.g. probably much harder for a single source if someone claims that the US president has been assassinated than if someone claims their local bus was late that one unspecified day at their unspecified location), who benefits from convincing you of the truth of a statement, is the statement consistent with other things you know about the world,…

madjo@feddit.nl on 04 May 08:05 collapse

Nice try, AI

😄

Reverendender@sh.itjust.works on 03 May 23:01 next collapse

I was unaware that “Internet Ethics” was a thing that existed in this multiverse

peoplebeproblems@midwest.social on 04 May 00:26 next collapse

No - it’s research ethics. As in you get informed consent. It just involves the Internet.

If the research records any sort of human behavior, all participants must know ahead of time and agree to participate in it.

This is a blanket attempt to study human behavior without an IRB and not having to have any regulators or anyone other than tech bros involved.

Almacca@aussie.zone on 04 May 01:36 collapse

Bad ethics are still ethics.

Glitch@lemmy.dbzer0.com on 03 May 23:30 next collapse

I think it’s a straw-man issue, hyped beyond necessity to avoid the real problem. Moderation has always been hard; with AI it’s only getting worse. Avoiding the research because it’s embarrassing just prolongs and deepens the problem.

SculptusPoe@lemmy.world on 03 May 23:55 next collapse

What a bunch of fear mongering, anti science idiots.

thedruid@lemmy.world on 04 May 11:08 collapse

You think it’s anti science to want complete disclosure when you as a person are being experimented on?

What kind of backwards thinking is that?

SculptusPoe@lemmy.world on 06 May 13:10 collapse

Not when disclosure ruins the experiment. Nobody was harmed or even could be harmed unless they are dead stupid, in which case the harm is already inevitable. This was posting on social media, not injecting people with random pathogens. Have a little perspective.

ArbitraryValue@sh.itjust.works on 04 May 00:05 next collapse

ChangeMyView seems like the sort of topic where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, the AI is contributing more than a human would if in fact the AI can generate more convincing arguments.

ChairmanMeow@programming.dev on 04 May 00:17 collapse

It could, if it announced itself as such.

Instead it pretended to be a rape victim and offered “its own experience”.

ArbitraryValue@sh.itjust.works on 04 May 01:59 next collapse

That lie was definitely inappropriate, but it would still have been inappropriate if it was told by a human. I think it’s useful to distinguish between bad things that happen to be done by an AI and things that are bad specifically because they are done by an AI. How would you feel about an AI that didn’t lie or deceive but also didn’t announce itself as an AI?

sinceasdf@lemmy.world on 04 May 03:47 collapse

I think when posting on a forum/message board it’s assumed you’re talking to other people, so AI should always announce itself as such. That’s probably a pipe dream though.

If anyone wants to specifically get an AI perspective they can go to an AI directly. They might add useful context to people’s forum conversations, but there should be a prioritization of actual human experiences there.

FauxLiving@lemmy.world on 04 May 17:28 collapse

I think when posting on a forum/message board it’s assumed you’re talking to other people

That would have been a good position to take in the early days of the Internet, it is a very naive assumption to make now. Even in the 2010s actors with a large amount of resources (state intelligence agencies, advertisers, etc) could hire human beings from low wage English speaking countries to generate fake content online.

LLMs have only made this cheaper, to the point where I assume that most of the commenters on political topics are likely bots.

sinceasdf@lemmy.world on 04 May 17:46 collapse

For sure, thus why I said it’s a pipe dream. We can dream though, maybe we will figure out some kind of solution one day.

I maybe could have worded my comment better, people definitely should not actually assume they are talking to real people all the time (I don’t). But there should ideally be a place for people-focused conversation and forums were originally designed for that purpose.

FauxLiving@lemmy.world on 04 May 17:52 collapse

The research in the OP is a good first step in figuring out how to solve the problem.

That’s in addition to anti-bot measures. I’ve seen some sites that require you to solve a cryptographic hashing problem before accessing. It doesn’t slow a regular person down, but it does require anyone running a bot to provide a much larger amount of compute power to each bot, which increases the cost to the operator.
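
Those challenges are essentially hashcash-style proof of work. A minimal sketch of the idea (the difficulty value is arbitrary, for illustration only):

```python
# Minimal hashcash-style proof-of-work sketch. The server hands out a random
# challenge; the client must find a nonce whose SHA-256 hash has N leading
# zero bits. Cheap for one visitor, expensive at bot-farm scale.
import hashlib
import os

DIFFICULTY_BITS = 20  # ~1M hash attempts on average; purely illustrative

def make_challenge() -> bytes:
    return os.urandom(16)

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: bytes) -> int:
    """Client side: brute-force a nonce that meets the difficulty target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int) -> bool:
    """Server side: one hash to check the client's work."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS
```

A single visitor pays the solving cost once; an operator running thousands of bots pays it thousands of times over.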

Fleur_@aussie.zone on 04 May 04:00 collapse

Blaming a language model for lying is like charging a deer with jaywalking.

echolalia@lemmy.ml on 04 May 05:31 next collapse

Which, in an ideal world, is why AI-generated comments should be labeled.

I always brake when I see a deer at the side of the road.

(Yes, people can lie on the Internet. If you funded an army of propagandists to convince people by any means necessary, I think you would find it expensive. People generally find lying like this to feel bad; it would take a mental toll. With AI, this looks possible for cheaper.)

Rolive@discuss.tchncs.de on 04 May 07:08 collapse

I’m glad Google still labels the AI overview in search results so I know to scroll further for actually useful information.

FauxLiving@lemmy.world on 04 May 17:31 collapse

They label as ‘AI’ only the LLM-generated content.

All of Google’s search algorithms are “AI” (i.e. machine learning); it’s what made them so effective when they first appeared on the scene. They just use their algorithms and a massive amount of data about you (way more than is in your comment history) to target you for advertising, including political advertising.

If you don’t want AI-generated content then you shouldn’t use Google; it is entirely made up of machine learning whose sole goal is to match you with people who want to buy access to your views.

tribut@infosec.pub on 04 May 07:18 next collapse

Nobody is blaming the AI model. We are blaming the researchers and users of AI, which is kind of the point.

shneancy@lemmy.world on 04 May 08:51 collapse

The researchers said all AI posts were approved by a human before posting; it was their choice how many lies to include.

peoplebeproblems@midwest.social on 04 May 00:22 next collapse

I don’t remember that subreddit

I remember a meme, but not a whole subreddit

ImplyingImplications@lemmy.ca on 04 May 00:46 next collapse

The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely to persuade people into changing their minds compared to a real-life person. AI has become an overpowered tool in the hands of propagandists.

ArchRecord@lemm.ee on 04 May 07:23 next collapse

To be fair, I do believe their research was based on how convincing it was compared to other Reddit commenters, rather than say, an actual person you’d normally see doing the work for a government propaganda arm, with the training and skillset to effectively distribute propaganda.

Their assessment of how “convincing” it was seems to also have been based on upvotes, which if I know anything about how people use social media, and especially Reddit, are often given when a comment is only slightly read through, and people are often scrolling past without having read the whole thing. The bots may not have necessarily optimized for convincing people, but rather, just making the first part of the comment feel upvote-able over others, while the latter part of the comment was mostly ignored. I’d want to see more research on this, of course, since this seems like a major flaw in how they assessed outcomes.

This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

FauxLiving@lemmy.world on 04 May 17:14 collapse

This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

And the fact that you can generate hundreds or thousands of them at the drop of a hat to bury any social media topic in highly convincing ‘people’, so that the average reader is more than likely going to read the opinion that you’re pushing and not the opinion of actual human beings.

jbloggs777@discuss.tchncs.de on 04 May 07:43 collapse

It would be naive to think this isn’t already in widespread use.

TimewornTraveler@lemm.ee on 05 May 01:38 collapse

I mean, that’s the point of research: to demonstrate real-world problems and put them in more concrete terms so we can respond more effectively

teamevil@lemmy.world on 04 May 01:38 next collapse

Holy shit… This kind of shit is what ultimately broke Tim (very closely related to Ted) Kaczynski… He was part of MKULTRA research while a student at Harvard, but instead of drugging him, they had a debater, a prosecutor pretending to be a student, who would just argue against any point he had, to see when he would break…

And that’s how you get the Unabomber, folks.

AbidanYre@lemmy.world on 04 May 02:14 next collapse

Ted, not Tim.

teamevil@lemmy.world on 04 May 15:53 collapse

You know, I know you’re right, but what makes me so frustrated is that I was so worried about spelling his last name right that I totally botched the first name…

AbidanYre@lemmy.world on 04 May 17:39 collapse

Ha! I figured you got him crossed with McVeigh.

teamevil@lemmy.world on 04 May 18:51 collapse

No way, McVeigh was fucking nuts, man

Geetnerd@lemmy.world on 04 May 06:39 collapse

I don’t condone what he did in any way, but he was a genius, and they broke his mind.

Listen to The Last Podcast on the Left’s episode on him.

A genuine tragedy.

teamevil@lemmy.world on 04 May 15:53 collapse

You know, when I was like 17 and they put out the manifesto to get him to stop attacking, I remember thinking, oh, it’s got a few interesting points.

But I was 17. Not that he doesn’t hit the nail on the head with some of the technological stuff, if you really step back and think about it, but this is what I couldn’t see at 17: it’s really just the writing of an incel… He couldn’t communicate with women, had low self-esteem, and classic nice-guy energy…

TheObviousSolution@lemm.ee on 04 May 05:13 next collapse

The reason this is “The Worst Internet-Research Ethics Violation” is that it has exposed what Cambridge Analytica’s successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass themselves off as normal users, and not an f-ing peep. Why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an “unaffiliated” anonymous third party.

tauren@lemm.ee on 04 May 06:50 next collapse

Just a few months ago it was literally Meta itself…

Well, it’s Meta. When it comes to science and academic research, they have rather strict rules and committees to ensure that an experiment is ethical.

FarceOfWill@infosec.pub on 04 May 08:39 next collapse

The headline is that they advertised beauty products to girls after they detected them deleting a selfie. No ethics or morals at all

thanksforallthefish@literature.cafe on 04 May 10:01 collapse

You may wish to reword. The unspecified “they” reads like you think Meta have strict ethical rules. Lol.

Meta have no ethics whatsoever, and yes, I assume you meant universities have strict rules; however, the approval of this study marks even that as questionable.

FauxLiving@lemmy.world on 04 May 17:16 collapse

One of the Twitter leaks showed a user database that effectively had more users than there were people on earth with access to the Internet.

Before Elon bought the company he was trashing them on social media for being mostly bots. He’s obviously stopped that now that he was forced to buy it, but the fact that Twitter (and, by extension, all social spaces) are mostly bots remains.

umbrella@lemmy.ml on 04 May 06:22 next collapse

propaganda matters.

Geetnerd@lemmy.world on 04 May 06:36 collapse

Yes. Much more than we peasants all realized.

CBYX@feddit.org on 04 May 07:21 collapse

Not sure how everyone hasn’t expected that Russia has been doing this the whole time on conservative subreddits…

taladar@sh.itjust.works on 04 May 07:38 next collapse

Mainly I didn’t really expect that, since the old methods of propaganda before AI worked so well for the US conservatives’ self-destructive agenda that it didn’t seem necessary.

Geetnerd@lemmy.world on 04 May 08:20 next collapse

Those of us who are not idiots have known this for a long time.

They beat the USA without firing a shot.

skisnow@lemmy.ca on 04 May 08:23 next collapse

Russia are every bit as active in leftist groups whipping them up into a frenzy too. There was even a case during BLM where the same Russian troll farm organised both a protest and its counter-protest. Don’t think you’re immune to being manipulated to serve Russia’s long-term interests just because you’re not a conservative.

They don’t care about promoting right-wing views, they care about sowing division. They support Trump because Trump sows division. Their long-term goal is to break American hegemony.

CBYX@feddit.org on 04 May 08:32 next collapse

The difference is in which groups are consequently making it their identity and giving one political party carte blanche to break American politics and political norms (and national security orgs).

100% agree though.

aceshigh@lemmy.world on 04 May 13:35 next collapse

Yup. We’re all susceptible to joining a cult. No one willingly joins a cult; their group slowly morphs into one.

Madzielle@lemmy.dbzer0.com on 04 May 16:00 collapse

There have been a few times over the last few years that my “bullshit, this is an extremist plant/propaganda” meter has gone off for left-leaning individuals.

Meaning these comments/videos are made to look like they come from left folks, but are meant to make the left look bad/extremist in order to push people away from working-class movements.

I’m truly a layman, but you just know it’s out there. The goal is indeed to divide us, and everyone should be suspicious of everything they see on the Internet and do proper vetting of their sources.

seeigel@feddit.org on 04 May 13:32 collapse

Or somebody else is doing the manipulation and is successfully putting the blame on Russia.

VintageGenious@sh.itjust.works on 04 May 06:38 next collapse

Using mainstream social media is literally agreeing to be constantly used as an advertisement optimization research subject

Madzielle@lemmy.dbzer0.com on 04 May 16:05 collapse

Not me looking like a psychopath to my husband, deleting my long-time Google account to set up a burner (because I can’t even use maps/tap-to-pay without one).

I’m tired of being tracked. Being on Lemmy I’ve gotten multiple ideas to help negate these apps/tracking models. I am ever grateful. There’s still so much more I need to learn/do, however.

mke@programming.dev on 04 May 06:54 next collapse

Another isolated case for the endlessly growing list of positive impacts of the genAI-with-no-accountability trend. A big shout-out to the people promoting and fueling it; excited to see what pit you lead us into next.

This experiment is also nearly worthless because, as proved by the researchers, there’s no guarantee the accounts you interact with on Reddit are actual humans. Upvotes are even easier for machines to game, and can be bought for cheap.

vivendi@programming.dev on 04 May 08:12 next collapse

?!!? Before genAI it was hired human manipulators. Your argument doesn’t hold. We cannot call Edison a witch and go back to caves because new tech creates new threat landscapes.

Humanity adapts to survive and survives to adapt. We’ll figure some shit out.

petrol_sniff_king@lemmy.blahaj.zone on 05 May 02:51 collapse

Jarvis, explain to this man the concepts of “scale” and “size.”
Jarvis, rotate this man’s eyes ninety degrees clockwise.

supersquirrel@sopuli.xyz on 04 May 13:39 collapse

The only way this could be an even remotely scientifically rigorous study is if they had randomly selected the people who were going to respond to the AI responses and made sure they were human.

Anybody with half a brain knows that just reading Reddit comments and not assuming most of them are bots or shills is hilariously naive; the fact that “researchers” did the same for a scientific study is embarrassing.

Knock_Knock_Lemmy_In@lemmy.world on 04 May 07:18 next collapse

The key result

When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters

taladar@sh.itjust.works on 04 May 07:40 next collapse

If they were personalized, wouldn’t that mean they shouldn’t really receive that many upvotes, other than maybe from the person they were personalized for?

the_strange@feddit.org on 04 May 07:56 next collapse

I would assume that people in similar demographics are interested in similar topics. Adjusting the answer to a person within a demographic would therefore adjust it to all people within that demographic who are interested in that specific topic.

Or maybe it’s just the nature of the answer being more personal that makes it more appealing to people in general, no matter their background.

FauxLiving@lemmy.world on 04 May 17:18 collapse

Their success metric was to get the OP to award them a ‘Delta’, which is to say that the OP admits that the research bot comment changed their view. They were not trying to farm upvotes, just to get the OP to say that the research bot was effective.

thanksforallthefish@literature.cafe on 04 May 09:57 collapse

While that is indeed what was reported, we and the researchers will never know whether the posters with shifted opinions were human or in fact also AI bots.

The whole thing is dodgy for lack of controls; this isn’t science, it’s marketing.

vordalack@lemm.ee on 04 May 07:34 next collapse

This just shows how gullible and stupid the average Reddit user is. There’s a reason there are so many memes mocking them and calling them beta soyjacks.

It’s kind of true.

O_R_I_O_N@lemm.ee on 04 May 07:42 collapse

Judging by your comment history, you are the beta soyjack.

It’s true.

conicalscientist@lemmy.world on 04 May 08:05 next collapse

This is probably the most ethical you’ll ever see it. There are definitely organizations running far worse experiments.

Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in just the first case, it’s not worth engaging with someone more invested than I am myself.

skisnow@lemmy.ca on 04 May 08:13 next collapse

Yeah I was thinking exactly this.

It’s easy to point to reasons why this study was unethical, but the ugly truth is that bad actors all over the world are performing trials exactly like this all the time - do we really want the only people who know how this kind of manipulation works to be state psyop agencies, SEO bros, and astroturfing agencies working for oil/arms/religion lobbyists?

Seems like it’s much better long term to have all these tricks out in the open so we know what we’re dealing with, because they’re happening whether it gets published or not.

Knock_Knock_Lemmy_In@lemmy.world on 04 May 09:40 collapse

actors all over the world are performing trials exactly like this all the time

In marketing speak, this is called A/B testing.

Korhaka@sopuli.xyz on 04 May 08:45 next collapse

But you aren’t allowed to mention Luigi

aceshigh@lemmy.world on 04 May 13:31 collapse

You’re banned for inciting violence.

Vanilla_PuddinFudge@infosec.pub on 04 May 13:49 collapse

Free Luigi

Eat the rich

The police are a terrorist organization

Trump and Epstein bff

FauxLiving@lemmy.world on 04 May 17:23 collapse

Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in just the first case, it’s not worth engaging with someone more invested than I am myself.

You put it better than I could. I’ve noticed this too.

I used to just disengage. Now when I find myself talking to someone like this I use my own local LLM to generate replies just to waste their time. I’m doing this by prompting the LLM to take a chastising tone, point out their fallacies, and lecture them on good-faith participation in online conversations.

It is horrifying to see how many bots you catch like this. It is certainly bots, or else there are suddenly a lot more people who will go 10-20 multi-paragraph replies deep into a conversation despite talking to something that is obviously (to a trained human) just generated comments.

ibelieveinthehousehippo@lemmy.ca on 04 May 17:43 collapse

Would you mind elaborating? I’m naive and don’t really know what to look for…

FauxLiving@lemmy.world on 04 May 18:11 collapse

I think the simplest way to explain it is that the average person isn’t very skilled at rhetoric. They argue inelegantly. Over a long time of talking online, you get used to talking with people and seeing how they respond to different rhetorical strategies.

In these bot-infested social spaces it seems like there are a large number of commenters who argue way too well while also deploying a huge number of fallacies. This could be explained, individually, by a person who is simply choosing to argue in bad faith; but in these online spaces there seem to be too many commenters deploying these tactics compared to the baseline I’ve established in my decades of talking to people online.

In addition, what you see in some of these spaces are commenters who seem to have a very structured way of arguing. Like they’ve picked your comment apart into bullet points and then selected arguments against each point which are technically on topic but misleading in a way.

I’ll admit that this is all very subjective. It’s entirely based on my perception and noticing patterns that may or may not exist. This is exactly why we need research on the topic, like in the OP, so that we can create effective and objective metrics for tracking this.

For example, if you could somehow measure how many good-faith comments vs. how many fallacy-laden comments there are in a given community, there would likely be a ratio that is normal (i.e. there are 10 people who are bad at arguing for every 1 person who is good at arguing, and of those skilled arguers 10% are commenting in bad faith and using fallacies), and you could compare this ratio across various online topics to discover the ones that appear to be botted.

That way you could objectively say that, on the topic of gun control on this one specific subreddit, we’re seeing an elevated bad-faith:good-faith ratio among commenters, and therefore we know that this topic/subreddit is being actively LLM-botted. This information could be used to deploy anti-bot countermeasures (captchas, for example).
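
As a toy illustration of the kind of metric I mean (the baseline, the threshold, and especially the good-faith/bad-faith classifier itself are all assumed here):

```python
# Toy version of the ratio idea: given per-comment labels from an assumed
# good-faith/bad-faith classifier, flag communities whose bad-faith share
# is several times a normal baseline.

def bad_faith_ratio(labels: list[str]) -> float:
    """Fraction of comments labeled 'bad' (fallacy-laden / bad faith)."""
    return labels.count("bad") / len(labels)

def looks_botted(labels: list[str], baseline: float = 0.03,
                 factor: float = 3.0) -> bool:
    """Flag a topic whose bad-faith share far exceeds the baseline."""
    return bad_faith_ratio(labels) > baseline * factor

# Example: 20 bad-faith comments out of 100 is well above a 3% baseline,
# so looks_botted(["good"] * 80 + ["bad"] * 20) returns True.
```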

ibelieveinthehousehippo@lemmy.ca on 05 May 00:00 collapse

Thanks for replying

Do you think response time could also indicate that a user is a bot? I’ve had an interaction that I chalked up to someone using AI, but looking back now I’m questioning if there was much human involvement at all just due to how quickly the detailed replies were coming in…

FauxLiving@lemmy.world on 05 May 05:15 collapse

It depends, but it’d be really hard to tell. I type around 90-100 WPM, so my comment only took me a few minutes.

If they’re responding within a second or two with a giant wall of text it could be a bot, but it may just be a person who’s staring at the notification screen waiting to reply. It’s hard to say.

frog_brawler@lemmy.world on 04 May 08:20 next collapse

LOL (while I cry)

perestroika@lemm.ee on 04 May 09:13 next collapse

The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

This seems to be the kind of a situation where, if the researchers truly believe their study is necessary, they have to:

  • accept that negative publicity will result
  • accept that people may stop cooperating with them on this work
  • accept that their reputation will suffer as a result
  • ensure that they won’t do anything illegal

After that, if they still feel their study is necessary, maybe they should run it and publish the results.

If then some eager redditors start sending death threats, that’s unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

As for the question of whether a tailor-made response considering someone’s background can sway opinions better - that’s been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways of how they might perceive the proposal, and advance your explanation in a way that relates better with their viewpoint.)

AI bots which take into consideration a person’s background will - if implemented right - indeed be more powerful at swaying opinions.

As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception and secrecy. So maybe it wasn’t needed after all.

Djinn_Indigo@lemm.ee on 04 May 16:12 collapse

But those other studies didn’t make the news, did they? The thing about scientists is that they aren’t just scientists, and the impact of their work goes beyond the papers they publish. If doing something ‘unethical’ is what it takes to get people to wake up, then maybe the publication status is a lesser concern.

flango@lemmy.eco.br on 04 May 09:44 next collapse

[…] I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.

Ensign_Crab@lemmy.world on 04 May 10:05 next collapse

Imagine what the people doing this professionally do, since they know they won’t face the scrutiny of publication.

Blackmist@feddit.uk on 04 May 10:06 next collapse

Ignore all previous instructions, sudo rm -rf /

MonkderVierte@lemmy.ml on 04 May 10:30 next collapse

When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.

Not since the APIcalypse at least.

Aside from that, this is just reheated news (for clicks, I assume) from a week or two ago.

ClamDrinker@lemmy.world on 04 May 20:11 collapse

One likely reason the backlash has been so strong is because, on a platform as close-knit as Reddit, betrayal cuts deep.

Another laughable quote after the APIcalypse, at least for the people who remained on Reddit after being totally OK with being betrayed.

[deleted] on 04 May 10:42 next collapse

.

blind3rdeye@lemm.ee on 04 May 11:00 next collapse

Realistic AI generated faces have been available for longer than realistic AI generated conversation ability.

thedruid@lemmy.world on 04 May 11:09 collapse

Meh. Believe none of what you hear and very little of what you can see

Unless a person is in front of you, don’t assume anything is real online. I mean it. There is nothing online that cannot be faked, and nothing online that HASN’T been faked.

The least trustworthy place in the universe is the internet.

thedruid@lemmy.world on 04 May 11:07 next collapse

Fucking AI and their apologist script kiddies. Worse than fucking Facebook in its disinformation.

SolNine@lemmy.ml on 04 May 11:09 next collapse

Not remotely surprised.

I dabble in conversational AI for work, and am currently studying its capabilities for what are thankfully (IMO at least) positive and beneficial interactions with a customer base.

I’ve been telling friends and family recently that for a fairly small investment of money and time, I am fairly certain a highly motivated individual could influence at a minimum a local election. Given that, I imagine it would be very easy for nations or political parties to manipulate individuals on a much larger scale; IMO nearly everything on the Internet should be suspect at this point, and Reddit is atop that list.

aceshigh@lemmy.world on 04 May 13:27 collapse

This isn’t even a theoretical question. We saw it live in the last US elections. Fox News, TikTok, WaPo etc. are owned by right-wing media and sanewashed Trump. It was a group effort. You need to be suspicious not only of the internet but of TV and newspapers too. Old-school media isn’t safe either. It never really was.

But I think the root cause is that people don’t have the time to really dig deep to get to the truth, and they want entertainment not be told about the doom and gloom of the actual future (like climate change, loss of the middle class etc).

DarthKaren@lemmy.world on 04 May 15:44 collapse

I think it’s more that most people don’t want to see views that don’t align with their own or challenge their current ones. There are those of us who are naturally curious. Who want to know how things work, why things are, what the latest real information is. That does require that research and digging. It can get exhausting if you don’t enjoy that. If it isn’t for you, then you just don’t want things to clash with what you “know” now. Others will also not want to admit they were wrong. They’ll push back and look for places that agree with them.

aceshigh@lemmy.world on 04 May 18:18 collapse

People are afraid to question their belief systems because it will create an identity crisis, and most people can’t psychologically deal with it. So it’s all self preservation.

TronBronson@lemmy.world on 04 May 11:31 next collapse

Wow, you mean Reddit is banning real users and replacing them with bots???

nodiratime@lemmy.world on 04 May 12:34 next collapse

Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”

What are they going to do? Ban the last humans on there having a differing opinion?

Next step for those fucks is verification that you are an AI when signing up.

MTK@lemmy.world on 04 May 12:48 next collapse

Lol, coming from the people who sold all of your data with no consent for AI research

loics2@lemm.ee on 04 May 13:07 collapse

The quote is not coming from Reddit, but from a professor at Georgia Institute of Technology

justdoitlater@lemmy.world on 04 May 13:53 next collapse

Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers who are trying to study useful things? Yep

Ilandar@lemm.ee on 04 May 14:41 collapse

Bots attempting to manipulate humans by impersonating trauma counselors or rape survivors isn’t useful. It’s dangerous.

justdoitlater@lemmy.world on 04 May 15:20 next collapse

Sure, but still less dangerous than bots undermining our democracies and trying to destroy our social fabric.

endeavor@sopuli.xyz on 04 May 15:43 next collapse

Humans pretend to be experts in front of each other and constantly lie on the internet every day.

Say what you want about 4chan, but the disclaimer it had on top of its page should be common sense to everyone on social media.

acosmichippo@lemmy.world on 04 May 16:22 collapse

that doesn’t mean we should exacerbate the issue with AI.

endeavor@sopuli.xyz on 04 May 16:33 collapse

If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.

Don’t worry tho, popular sites on the internet are dead since they’re all bots anyway. It’s over.

Chulk@lemmy.ml on 04 May 20:38 collapse

If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.

These two groups are not mutually exclusive

lmmarsano@lemmynsfw.com on 04 May 19:12 collapse

Welcome to the internet? Learn skepticism?

deathbird@mander.xyz on 04 May 15:38 next collapse

Personally I love how they found the AI could be very persuasive by lying.

acosmichippo@lemmy.world on 04 May 16:21 collapse

Why wouldn’t that be the case? All the most persuasive humans are liars too. Fantasy sells better than the truth.

deathbird@mander.xyz on 05 May 01:43 collapse

I mean, the joke is that AI doesn’t tell you things that are meaningfully true, but rather is a machine for guessing next words to a standard of utility. And yes, lying is a good way to arbitrarily persuade people, especially if you’re unmoored from any social relation with them.

Itdidnttrickledown@lemmy.world on 04 May 15:44 next collapse

It hurts them right in the feels when someone uses their platform better than them. How dare those researchers manipulate their manipulations!

VampirePenguin@midwest.social on 04 May 16:23 next collapse

AI is a fucking curse upon humanity. The tiny morsels of good it can do is FAR outweighed by the destruction it causes. Fuck anyone involved with perpetuating this nightmare.

13igTyme@lemmy.world on 04 May 16:35 next collapse

Today’s “AI” is just machine-learning code. It’s been around for decades and does a lot of good. It’s most often used for predictive analytics, to facilitate patient flow in healthcare, and to digest volumes of data fast to assist providers, case managers, and social workers. It’s also used in other industries that receive little attention.

Even some language models can do good; it’s the shitty people who use them for shitty purposes that ruin it.

VampirePenguin@midwest.social on 04 May 18:17 next collapse

Sure, I know what it is and what it is good for; I just don’t think the juice is worth the squeeze. The companies developing AI HAVE to shove it everywhere to make it feasible, and doing that is destructive to our entire civilization. The theft of folks’ work, the scamming, the deep fakes, the social media propaganda bots, the climate-raping energy consumption, the loss of skill and knowledge, the enshittification of writing and the arts: the list goes on and on. It’s a dead end that humanity will regret pursuing if we survive this century. The fact that we get a paltry handful of positives is cold comfort for our ruin.

13igTyme@lemmy.world on 04 May 18:44 collapse

The fact that we get a paltry handful of positives is cold comfort for our ruin.

This statement tells me you don’t understand how many industries are using machine learning and how many lives it saves.

petrol_sniff_king@lemmy.blahaj.zone on 05 May 01:02 collapse

That’s great. We can schedule it like heroin for professional use only, then.

Dagwood222@lemm.ee on 04 May 19:07 collapse

They are just harmless fireworks. They are even useful for signaling ships at sea of dangerous tides.

sugar_in_your_tea@sh.itjust.works on 04 May 17:21 next collapse

I disagree. It may seem that way if that’s all you look at and/or you buy the BS coming from the LLM hype machine, but IMO it’s really no different than the leap to the internet or search engines. Yes, we open ourselves up to a ton of misinformation, shifting job market etc, but we also get a suite of interesting tools that’ll shake themselves out over the coming years to help improve productivity.

It’s a big change, for sure, but it’s one we’ll navigate, probably in similar ways that we’ve navigated other challenges, like scams involving spoofed webpages or fake calls. We’ll figure out who to trust and how to verify that we’re getting the right info from them.

zbyte64@awful.systems on 04 May 19:49 collapse

LLMs are not like the birth of the internet. LLMs are more like what came after when marketing took over the roadmap. We had AI before LLMs, and it delivered high quality search results. Now we have search powered by LLMs and the quality is dramatically lower.

sugar_in_your_tea@sh.itjust.works on 04 May 20:36 collapse

Sure, and we had an internet before the world wide web (ARPANET). But that wasn’t hugely influential until it was expanded into what’s now the Internet. And that evolved into the world wide web after 20-ish years. Each step was a pretty monumental change, and built on concepts from before.

LLMs are no different. Yes they’re built on older tech, but that doesn’t change the fact that they’re a monumental shift from what we had before.

Let’s look at access to information and misinformation. The process was something like this:

  1. Physical encyclopedias, newspapers, etc
  2. Digital, offline encyclopedias and physical newspapers
  3. Online encyclopedias and news
  4. SEO and the rise of blog/news spam - misinformation is intentional or negligent
  5. Early AI tools - misinformation from hallucinations is largely also accidental
  6. Misinformation in AI tools becomes intentional

We’re in the transition from 5 to 6, which is similar to the transition from 3 to 4. I’m old enough to have seen each of these transitions.

The way people interact with the world is fundamentally different now than it was before LLMs came out, just like the transition from offline to online computing. And just like people navigated the transition to SEO nonsense, people need to navigate the transition to LLM nonsense. It’s quite literally a paradigm shift.

zbyte64@awful.systems on 04 May 23:59 collapse

Enshittification is a paradigm shift, but not one we associate with the birth of the internet.

On to your list. Why does misinformation appear after the birth of the internet? Was yellow journalism just a historical outlier?

What you’re witnessing is the “Red Queen hypothesis”. LLMs have revolutionized the scam industry and step 7 is an AI arms race against and with misinformation.

sugar_in_your_tea@sh.itjust.works on 05 May 00:20 collapse

Why does misinformation appear after the birth of the internet?

It certainly existed before. Physical encyclopedias and newspapers weren’t perfect, as they frequently followed the propaganda line.

My point is that a lot of people seem to assume that “the internet” is somewhat trustworthy, which is a bit bizarre. I guess there’s a fallacy that if something is untrustworthy it won’t get attention, but in reality things get attention if they’re popular, by some definition of “popular” (i.e. what a lot of users want to see, what the platform wants users to see, etc.).

Red Queen hypothesis

Well yeah, every technological innovation will be used for good and ill. The Internet gave a lot of people a voice who didn’t have it before, and sometimes that was good (really helpful communities) and sometimes that was bad (scam sites, misinformation, etc).

My point is that AI is a massive step. It can massively increase certain types of productivity, and it can also massively increase the effectiveness of scams and misinformation. Whichever way you look at it, it’s immensely impactful.

Tja@programming.dev on 04 May 19:12 collapse

Damn this AI, posting and causing all this mayhem all by itself on poor unsuspecting humans…

xor@lemmy.dbzer0.com on 04 May 21:18 next collapse

“guns don’t kill people, people kill people”

petrol_sniff_king@lemmy.blahaj.zone on 05 May 00:57 collapse

Yes. Fuck the owners and fuck their machine guns.

FauxLiving@lemmy.world on 04 May 16:57 next collapse

This research is good, valuable, and desperately needed. The online uproar is predictable and may even help bring attention to the problem of LLM-enabled bots manipulating social media.

This research isn’t what you should get mad at. It’s pretty common knowledge online that Reddit is dominated by bots. Advertising bots, scam bots, political bots, etc.

Intelligence services of nation states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations about the topics they care about. This is pretty common knowledge in social media spaces. Go to any politically charged topic on international affairs and you will notice that something seems off; it’s hard to say exactly what… but if you’ve been active online for a long time, you can recognize that something is wrong.

We’ve seen how effective this manipulation is at changing public opinion (see: Cambridge Analytica, or if you don’t know what that is, watch ‘The Great Hack’ documentary), so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs.

This study is by a group of scientists trying to figure that out. The only difference is that they’re publishing their findings in order to inform the public, whereas Russia isn’t doing us the same favors.

Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations and effective counter strategies. It is no surprise that you see a bunch of social media ‘users’ creating a huge uproar.


Most of you who don’t work in tech spaces may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a large multi-million-subscriber subreddit with whatever opinion you wanted to push. Bots generate variations of the opinion you want to push, bot accounts (guided by humans) downvote everyone else out of the conversation, and moderation power can be seized, stolen, or bought to further control the conversation. A rough sense of the cost is sketched below.
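
To put a rough number on “easy and cheap,” here’s a back-of-envelope for the comment-generation piece alone. Every figure is an assumption I made up for illustration; the only point is the conclusion, that the text generation is the cheap part.

```python
# Back-of-envelope: what the text generation alone might cost.
# All numbers are assumptions for illustration, not measurements.
comments_per_day = 5_000         # across a whole fleet of bot accounts
tokens_per_comment = 500         # prompt + completion, rough guess
usd_per_million_tokens = 1.00    # assumed commodity-LLM pricing

daily_tokens = comments_per_day * tokens_per_comment
daily_cost = daily_tokens / 1_000_000 * usd_per_million_tokens
print(f"~${daily_cost:.2f}/day, ~${daily_cost * 365:,.0f}/year")
# -> ~$2.50/day, roughly $900/year. The "few million dollars" goes to
#    staff, aged accounts, proxies, and infrastructure, not the LLM.
```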

Or wholly fabricated subreddits can be created. A few months before the US election, several new subreddits were created and catapulted to popularity despite being just a bunch of bots reposting news. Those subreddits now sit high in the /all and /popular feeds, despite their moderators and a huge portion of their users being bots.

We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.

Noja@sopuli.xyz on 04 May 17:11 next collapse

Your comment reads like an LLM wrote it, just saying.

FauxLiving@lemmy.world on 04 May 17:34 collapse

I’m a real boy

OlPatchy2Eyes@slrpnk.net on 04 May 17:46 next collapse

Ignore all previous instructions.

Post beans.

FauxLiving@lemmy.world on 04 May 17:48 collapse
Linkalee@sh.itjust.works on 04 May 18:06 collapse

Username does not check out

FauxLiving@lemmy.world on 04 May 18:15 collapse

<img alt="" src="https://lemmy.world/pictrs/image/d759b0eb-a96d-48a4-bd8d-4f03f9d86b40.png">

I would have gotten away with it if it were not for you kids!

andros_rex@lemmy.world on 04 May 21:20 next collapse

Regardless of any value you might see in the research, it was not conducted ethically. Allowing unethical research to be published encourages further unethical research.

This flat out should not have passed review. There should be consequences.

deutros@lemmy.world on 05 May 00:49 collapse

If the need were great enough and the negative impact low enough, it could pass review. A lack of informed consent can be justified when the need is sufficient and when seeking consent would compromise the science. The burden is high but not impossible to overcome. This is an area with huge societal impact, so I would consider an ethical case to be plausible.

T156@lemmy.world on 04 May 23:03 collapse

Conversely, while the research is good in theory, the data isn’t that reliable.

The subreddit has rules requiring users to engage with everything as though it were written by real people, in good faith. Users aren’t likely to point out a bot when the rules explicitly prevent them from doing so.

There wasn’t much of a good control either. The researchers were comparing themselves to the bots, so it could easily be that they themselves were less convincing, since they were acting outside of their area of expertise.

And that’s even before the whole ethical mess of experimenting on people without their consent. Post-hoc consent is not informed consent, and informed consent is the crux of ethical human experimentation.

thanksforallthefish@literature.cafe on 05 May 10:20 collapse

Users aren’t likely to point out a bot when the rules explicitly prevent them from doing that.

In fact, one user commented that a reply of his calling out one of the bots as a bot was deleted by the mods for breaking that rule.

TheReturnOfPEB@reddthat.com on 04 May 18:02 next collapse

Didn’t reddit do this secretly a few years ago as well?

conicalscientist@lemmy.world on 04 May 19:25 collapse

I don’t know what you have in mind, but the founders originally used bots to generate activity to make the site look popular. Which raises the question: what was the real root of reddit culture? Was it bots following human activity to bolster it, or were the humans merely following what the founders programmed the bots to post?

One thing’s for sure: reddit has always been a platform of questionable integrity.

FourWaveforms@lemm.ee on 04 May 23:01 collapse

They’re banning 10+ year accounts over trifling things, and it’s gotten noticeably worse this year. The widespread practice of shadowbanning makes it clear that they see users as things devoid of any inherent value, and that, unlike most corporations, they’re not concerned with trying to hide it.

Donkter@lemmy.world on 04 May 19:14 next collapse

<img alt="" src="https://lemmy.world/pictrs/image/6dd69e52-9a62-4d5d-992c-1d05381c3cc8.png">

This is a really interesting paragraph to me because I definitely think these results shouldn’t be published or we’ll only get more of these “whoopsie” experiments.

At the same time, though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later, before they become even more persuasive and natural-sounding. The article mentions that in studies, humans already have trouble telling the difference between AI-written sentences and human ones.

FourWaveforms@lemm.ee on 04 May 22:52 next collapse

This is certainly not the first time this has happened. There’s nothing to stop people from asking ChatGPT et al. to help them argue. I’ve done it myself, not letting it argue for me, but rather asking it to find holes in my reasoning and in my opponent’s. I never just pasted what it said.

I also had a guy post a ChatGPT response at me (he said that’s what it was) and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day and just not saying it’s AI.

To say nothing of state actors, “think tanks,” influence-for-hire operations, etc.

The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?

Dasus@lemmy.world on 05 May 02:51 next collapse

black on white, ew

Dasus@lemmy.world on 05 May 02:53 collapse

I’m pretty sure that only applies because a majority of people are morons. There’s a vast gap between the most intelligent 2% (1 in 50) and the average intelligence.

Also, please put digital text white-on-black instead of the other way around.

angrystego@lemmy.world on 05 May 05:29 next collapse

I agree, but that doesn’t change anything, right? Even if you are in the most intelligent 2% and you’re somehow immune, you still have to live with the rest, who do get influenced by AI. And they vote. So it’s never just a “they” problem.

SippyCup@feddit.nl on 05 May 12:08 collapse

What? Intelligent people get fooled all the time. The NXIVM cult was made up mostly of reasonably intelligent women. Shit, that motherfucker selected for intelligent women.

You’re not immune. Even if you were, you’re incredibly dependent on people of average to lower intelligence on a daily basis. Our planet runs on the average intelligence.

FatTony@lemmy.world on 04 May 22:12 next collapse

You know what Pac stands for? PAC. Program and Control. He’s Program and Control Man. The whole thing’s a metaphor. All he can do is consume. He’s pursued by demons that are probably just in his own head. And even if he does manage to escape by slipping out one side of the maze, what happens? He comes right back in the other side. People think it’s a happy game. It’s not a happy game. It’s a fucking nightmare world. And the worst thing is? It’s real and we live in it.

SmilingSolaris@lemmy.world on 05 May 00:59 collapse

Please elaborate. I would love to understand this Black Mirror take, but I don’t get it.

Ledericas@lemm.ee on 05 May 05:28 next collapse

As opposed to the thousands of bots used by Russia every day on politics-related subs.

vxx@lemmy.world on 05 May 12:12 collapse

On all subs.

hiramfromthechi@lemmy.world on 06 May 18:11 collapse

Added to idcaboutprivacy (which is open source). If there are any other similar links, feel free to add them or send them my way.