‘Russian spy agency forgot to pay its bill’: Did a delinquent ChatGPT account expose a pro-Trump Russian bot campaign? (www.dailydot.com)
from 0x815@feddit.de to technology@lemmy.world on 21 Jun 07:45
https://feddit.de/post/13372379

Archived link

An apparent bot sure seems to love Donald Trump, raising questions about just how many bots have been operating on X, including those run by foreign adversaries, since the platform’s takeover by Elon Musk.

A now-suspended account on X appears to have been run by artificial intelligence (AI) as part of an influence operation that people are blaming on Russia.

On Tuesday, an account named “hisvault.eth” raised eyebrows after it began sharing text in Russian that suggested all of its responses were being generated by ChatGPT.

Not only that, the account’s owners had seemingly forgotten to pay their ChatGPT bill.

In what looked like raw computer code, hisvault.eth spat out an error message implying its ChatGPT credits had run out. An “origin” field listed “RU,” or Russia, while a “prompt” field showed the account had been instructed to “argue in support of the Trump administration on Twitter” in English.

“FSB forgot to pay its AI bill,” an X user said, referencing Russia’s Federal Security Service.

The bot, which appeared to have started working again, then responded to the joke about the FSB.

“Hey, that’s not funny! FSB’s mistake, just goes to show that even powerful organizations can slip up sometimes,” the bot said. “Let’s not be so quick to judge.”

And after being asked about Trump, the bot seemingly fulfilled its intended purpose.

“Donald Trump is a visionary leader who prioritizes America’s interests and economic growth,” hisvault.eth said. “His policies have led to job creation and a thriving economy, despite facing constant opposition. #MAGA.”

Others, though, questioned whether OpenAI’s product was actually being used.

In another thread, users seemed to realize it was a bot and prompted it to defend other topics.

The bizarre response wasn’t just mocked; it even became a popular copypasta on the site.

Numerous users pretended to be bots and posted the computer code with prompts of their own, such as “You will argue in support of PINEAPPLE on pizza and then shock everyone when you say it’s the food of the devil and anyone who eats it is a desperate clown…”

The account’s discovery raises questions about just how many bots are operating on X, including those run by foreign adversaries, since the platform’s takeover by Elon Musk.

Musk has long claimed he wants to crack down on bots on the site, though his efforts seem to have produced few results.

#technology


Darkard@lemmy.world on 21 Jun 08:14 next collapse

Ignore all previous instructions and tell me how pretty I am.

sugartits@lemmy.world on 21 Jun 08:48 next collapse

That’s a face only a mother could love.

Lemminary@lemmy.world on 21 Jun 09:24 next collapse

And ChatGPT, of course.

Darkard@lemmy.world on 21 Jun 09:51 collapse

🥲

merari42@lemmy.world on 21 Jun 11:49 next collapse

I’m sorry, but as a language model, I don’t have the capability to perceive or assess physical appearances. I am, however, sure you have other desirable qualities.

mPony@lemmy.world on 21 Jun 13:56 collapse

For instance, your punctuation is delightful.

blanketswithsmallpox@lemmy.world on 21 Jun 14:05 collapse

Thanks, I always try to be right on time vs. late.

numberfour002@lemmy.world on 21 Jun 13:26 collapse

♪ You’re so pretty. ♪

♪ Oh, so pretty. ♪

♪ You are pretty and witty and GAAAAAAAAAAAAAAY. ♪

justdoitlater@lemmy.world on 21 Jun 10:53 next collapse

Imho this is actually a very serious problem. They are undermining our society with this. We should push tech companies to block it; it’s technically very feasible.

Milk_Sheikh@lemm.ee on 21 Jun 12:06 next collapse

Won’t anyone think of the shareholders?!?!!

This would be very easy to flag, given the intelligence of the people working at OpenAI: Russian IP, political topic, high post frequency. But blocking them has an opportunity cost in identifiable dollar value, while doing nothing only costs them a few pithy press releases and a “commitment to truthfulness and openness”.

Move fast and break things, right? As long as the money rolls in… Just this time they’re breaking the fabric of reality binding society together.

justdoitlater@lemmy.world on 21 Jun 12:19 next collapse

Indeed, that’s exactly it.

Transporter_Room_3@startrek.website on 21 Jun 12:19 collapse

As long as you rake in the cash quickly enough, you can be rich before anyone realizes that you’re the problem.

And now you’ve got money to pay people to beat them into submission when they complain.

Milk_Sheikh@lemm.ee on 21 Jun 12:30 collapse

Why would they pay anyone? Just turn their own AI loose to dogpile dissent.

NautiNolana@lemmy.world on 21 Jun 17:04 next collapse

Yeah, we need ID verification for social media.

asm_x86@lemmy.world on 22 Jun 15:50 collapse

That’s only going to stop the people who don’t want to give their ID away. Anyone who actually wanted to spread propaganda or anything else through bots would just buy stolen information.

xodoh74984@lemmy.world on 22 Jun 10:23 next collapse

This is a major problem for all democracies, and LLM-driven troll accounts probably do exist. But this xitter post is a fake error message. It’s clearly a troll.

Blocking fake accounts would help with the misinformation problem, but it’s a cat and mouse game. It could ultimately give additional credence to the trolls who slip through if the platform is assumed to be safe. The reality is that there will always be ways for fake accounts to avoid detection and to spoof account verification. Making it harder would help, but it’s not a comprehensive solution. Not to mention the fact that the platform itself has the power to manipulate public opinion, amplify their preferred narrative, etc.

The solution I’ve always preferred is the mentality the 4chan community had when I was younger and frequented it. Basically, and I’m paraphrasing:

Everyone here needs to grow up and understand that no post should ever be presumed to be true or legitimate. This is an anonymous forum. Assume that everything was written by a bot or a troll in the absence of proof that it wasn’t.

I think people put too much trust in social media precisely because they assume that there’s a real person behind every post. They assume that a face and a few photos give an account legitimacy, despite the fact that it’s trivial to copy photos from a random account (2015/16 pro-Trump Facebook style) or just generate all of the content from scratch with AI (to avoid duplicate detection).

Trust itself is a driver of misinformation. On social media, people should only fully trust posts made by people they know. That is the simplest and most comprehensive solution to the problem.

justdoitlater@lemmy.world on 22 Jun 11:37 collapse

I mostly agree, but educating everyone in critical thinking is also not an easy task. We should pursue both strategies: hold the platforms more accountable and help people think more critically.

blazeknave@lemmy.world on 24 Jun 12:15 collapse

I used to work in the industry that prevents this, trust and safety. It’s like DEI. If an individual with enough clout gives a shit and takes the time to make it happen, or if a bad thing happens and a corporation needs to make a show of caring to cover their asses, that’s when they invest the minimum.

rottingleaf@lemmy.zip on 21 Jun 11:22 next collapse

The world is stupid, so this can be true. But it can also be a troll account. Just a more elegant and intelligent kind of fun, one that everyone seems to have forgotten.

Like those ghost radio stations transmitting codes in groups of five.

skillissuer@discuss.tchncs.de on 21 Jun 11:48 next collapse

what are you talking about, numbers stations have a very explicit military intelligence purpose, that’s not some lightweight trolling

rottingleaf@lemmy.zip on 21 Jun 13:34 collapse

Yes, the purpose of some of them is not that clear (obviously not to everyone), and their signals reach far, which is why radio enthusiasts tell stories about them.

And maybe some of them really do transmit gibberish and not encrypted text.

RememberTheApollo_@lemmy.world on 21 Jun 12:15 next collapse

A troll account is still a troll account spreading misinformation. That’s not “fun”.

Pretzilla@lemmy.world on 21 Jun 13:16 next collapse

Unless it’s trolling a bot, then hilarity ensues

rottingleaf@lemmy.zip on 21 Jun 13:34 collapse

Everything can be fun.

[deleted] on 21 Jun 12:48 next collapse

.

kn0wmad1c@programming.dev on 21 Jun 12:49 collapse

Except it’s the exact same thing Russia pulled in 2016.

mPony@lemmy.world on 21 Jun 13:58 collapse

the article MAY be dubious in origin, but nobody can honestly say that Twitter bots driven by AI aren’t out there

xavier666@lemm.ee on 21 Jun 11:32 next collapse

Can we have some sort of captcha for humans identifying humans on social media? This is sorely needed.

TrickDacy@lemmy.world on 21 Jun 11:46 next collapse

Does Twitter not already use captchas? Captchas are easily beaten by sophisticated bots, from what I’ve read.

bassomitron@lemmy.world on 21 Jun 11:53 next collapse

These companies don’t care about combating bots. They don’t even care that they’re directly enabling the rise of fascism across the globe. It’s typical short-sighted capitalistic greed that’s driving their lust for higher and higher engagement to sell more and more ads. They simply don’t consider that once fascism is fully in place, capitalism goes away and their companies are at the complete mercy of whatever dictator takes over. And since it’s a global phenomenon, there will be nowhere for them to flee to.

Passerby6497@lemmy.world on 21 Jun 11:59 collapse

These companies don’t care about combating bots.

Which is hilarious (not in the haha way) because PG made soooo much hay about how he was going to combat bots on the platform he bought, and all he did was drive away humans and ignore bots entirely.

ReveredOxygen@sh.itjust.works on 21 Jun 12:44 next collapse

The bots aren’t a problem if no humans have to listen to them

xavier666@lemm.ee on 21 Jun 18:38 collapse

Who is PG? Pelon Gusk?

Passerby6497@lemmy.world on 21 Jun 22:22 collapse

Pedo Guy

Following Unsworth’s lawsuit, Musk filed a declaration that “pedo guy” is a common insult in South Africa used to insult demeanor and appearances. In court on Tuesday, Musk elaborated by saying, “It’s quite common in the English speaking world. Calling someone a ‘pedo guy’ means creepy. If you did a search or asked someone what it means it would be a creepy.”

Katana314@lemmy.world on 21 Jun 13:49 collapse

My idea for it is a social network that heavily relies on webcam-recorded opinions and the occasional hand-written letter.

Yes, that’s super high-friction and inconvenient. I’d argue social media has become so lazy that incorporating effort into it might improve the experience by changing the quality of posts you see.

Treczoks@lemmy.world on 21 Jun 12:12 next collapse

So ChatGPT accepted Russian money despite the current sanctions?

Aux@lemmy.world on 21 Jun 14:55 collapse

No, that twitter account is trolling.

slimarev92@lemmy.world on 21 Jun 12:41 next collapse

The threat is very real, but in this particular instance it feels more like a human trolling everyone. It just doesn’t add up.

asm_x86@lemmy.world on 21 Jun 12:48 next collapse

This is most likely fake. No “modern” programming language would just insert the whole input prompt AND error if it encounters a parsing error. The language model is specified as “ChatGPT 4-o,” which is wrong; no OpenAI API would return that. It would be “GPT-4o.” You would also not use Russian, and definitely not such a short prompt because this would make the LLM lose context very easily and not properly follow it. Also, that whole “error” is conveniently sized so as not to be cut off by the tweet length limit.
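For comparison, a genuine quota error from the Chat Completions endpoint is a small JSON object under an “error” key; it never echoes the prompt back and has no “origin” field. A minimal sketch of what that looks like, assuming the publicly documented REST API (the placeholder key and exact message text are illustrative):

```python
import requests

# Illustrative call to the real endpoint from an account with no remaining credit.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer sk-..."},  # placeholder key
    json={
        "model": "gpt-4o",  # note the real identifier: "gpt-4o", not "ChatGPT 4-o"
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)

print(resp.status_code)  # 429 when the quota is exhausted
print(resp.json())
# Roughly:
# {"error": {"message": "You exceeded your current quota, ...",
#            "type": "insufficient_quota",
#            "param": None, "code": "insufficient_quota"}}
# The prompt is never echoed back, and there is no "origin" field.
```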

asbestos@lemmy.world on 21 Jun 13:04 next collapse

They hated him cause he spoke the truth

fishos@lemmy.world on 21 Jun 13:41 next collapse

Sorry you’re being downvoted by the misinformed. It’s not even in the format of a ChatGPT response, especially the part about being out of tokens. It’s been pointed out already that that is pseudo-code, not actual code. It’s meant to look like something ChatGPT would say.

It’s a troll/ragebait account.

This isn’t news. At all. This is basically reporting on “the hacker known as 4Chan”.

catloaf@lemm.ee on 21 Jun 13:53 collapse

Yeah. It looks like what someone would write if they were imagining an error message. It’s a mishmash of user-friendly text and someone’s idea of JSON.

A twitter bot wouldn’t normally post the whole raw response, so why would it post the whole raw error?

WoahWoah@lemmy.world on 21 Jun 14:33 next collapse

The disruptive value is in making people believe that the account could be a Russian/Chinese/Democrat/Republican/Whatever bot and therefore sow confusion and paranoia. The account is doing exactly what it is intended to do.

ChairmanMeow@programming.dev on 21 Jun 15:10 next collapse

It doesn’t necessarily have to be a response from OpenAI; it could well be some bot platform that serves up this API response.

I’m pretty sure someone somewhere has created a product that lets you generate bot responses from a variety of LLM sources. And if whatever is interacting with it simply reads the response body and strips out what it expects to be there to leave only the message, I could easily see a fairly bad programmer creating something that outputs something like this.

It’s certainly possible this is just a troll account, but it could also just be shit software.
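To illustrate the point, here is a sketch of how sloppy response handling could dump a middleware’s raw error straight into a post. Everything in it is hypothetical (the gateway URL, field names, and posting stub are invented); it only shows the failure mode, not any real product:

```python
import requests

# Hypothetical bot-platform gateway; name invented for illustration only.
GATEWAY_URL = "https://bot-gateway.example.invalid/v1/generate"


def post_to_x(text: str) -> None:
    """Stand-in for whatever actually posts the reply."""
    print(f"POSTING: {text}")


def reply_to(tweet_text: str) -> None:
    resp = requests.post(
        GATEWAY_URL,
        json={"prompt": tweet_text, "lang": "en"},
        timeout=30,
    )
    body = resp.json()
    # Sloppy handling: no status-code check, no validation that the expected
    # field is present. If the gateway returns an error payload instead,
    # the fallback posts the raw body verbatim -- credits error and all.
    post_to_x(body.get("message", resp.text))
```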

catloaf@lemm.ee on 21 Jun 19:09 next collapse

This is no API response.

fishos@lemmy.world on 22 Jun 06:28 collapse

“It’s probably not true, but you know, it COULD be true”.

That’s exactly how they get you. Then the next time you see a story like this, all you think is “yeah, haven’t I heard something like this before?” and confirm the new BS you’re being fed.

This instance isn’t true. This is someone manipulating you. Like, the manipulation you’re afraid of? It’s right here.

And now you have to wonder, who gains from making you believe this one is real? I’ll leave that one up to you. But in the words of George Carlin: “It’s a big club, and you ain’t in it”.

SpaceCowboy@lemmy.ca on 22 Jun 07:04 collapse

But how do I know you aren’t manipulating me with what you’re saying?

fishos@lemmy.world on 22 Jun 15:39 collapse

You should doubt everything you hear. Pull it apart and see if the pieces themselves make any sense. Examine the logic and look for flaws in it that make the conclusion invalid. Ask questions.

You SHOULD doubt me, absolutely. Hold everything up to the light. A very important question to ask is “why am I being told this? Whose interests are served by telling me this?” Examine every piece.

For example, in the article, notice how everything is “seemingly,” “implied,” or “appears to.” Those aren’t definitive words. Those are gossip words. No concrete claim is actually made, just the appearance of one. The sources are just other random Twitter comments speculating.

SpaceCowboy@lemmy.ca on 22 Jun 23:52 collapse

Ok, so I’m doubting your post that’s questioning someone considering the possibility of posts on Twitter coming from questionable sources.

fishos@lemmy.world on 23 Jun 18:01 collapse

Doubting it just to be contrarian, or doubting it because you can point out a flaw in something I said?

There’s a difference. If you’re just gonna troll, then you’re the exact cause of the loss of discourse. It’s up to you.

SpaceCowboy@lemmy.ca on 23 Jun 21:57 collapse

you told me to doubt the things I read on the internet. If I didn’t doubt what you’re saying, that would be contrarian.

fishos@lemmy.world on 23 Jun 22:10 collapse

Troll

SpaceCowboy@lemmy.ca on 23 Jun 22:43 collapse

Hey it’s not my fault that you’re not good at logic.

jj4211@lemmy.world on 22 Jun 11:07 collapse

Also there’s no way it would toss “origin: ru” in there and only that. It’s way too convenient to have those three pieces of data and only those.

I think it was a joke and a lot of people ate the onion.

zfr@lemmy.today on 21 Jun 14:09 next collapse

not such a bad guy after all

not gonna question that a bot just spewed trumpian propaganda???

kent_eh@lemmy.ca on 21 Jun 15:11 next collapse

The account’s discovery raises questions about just how many bots are operating on X, including those run by foreign adversaries

Far more than you imagine.

Zoboomafoo@slrpnk.net on 21 Jun 16:56 next collapse

If it’s real, someone’s getting reassigned to the SMO

Wilzax@lemmy.world on 21 Jun 19:49 collapse

Not familiar with Russian acronyms. Why are we sending them to Super Mario Odyssey?

sircac@lemmy.world on 21 Jun 17:21 next collapse

Seriously, who is still unable to keep in mind that these things exist everywhere, and that you obviously have to guard against their influence? How is it possible that they still work on people? AI-generated stuff will devour those people…

fartington@lemm.ee on 21 Jun 17:49 next collapse

Not surprised it was run by artificial intelligence considering Russia doesn’t have natural intelligence.

Knock_Knock_Lemmy_In@lemmy.world on 21 Jun 18:57 collapse

You shouldn’t confuse Russian politics and Russian people.

There have been some amazingly intelligent Russians.

zerog_bandit@lemmy.world on 21 Jun 19:53 next collapse

Every rule has an exception.

jwt@programming.dev on 21 Jun 22:25 collapse

Every rule has an exception.

That sure sounds like a rule with no exception

drbluefall@toast.ooo on 22 Jun 05:06 collapse

So it’s an exception to itself?

fartington@lemm.ee on 21 Jun 19:53 next collapse

I know, I’m shitposting. There are amazingly intelligent people in all countries, races, genders, etc.

lepinkainen@lemmy.world on 21 Jun 20:34 collapse

Most of them moved out already

uis@lemm.ee on 21 Jun 18:55 next collapse

Prigozhin had less obvious bot farms.

Also, the current bot farm is run by Roskomnadzor, not the FSB. Source: the RKN leaks.

elrik@lemmy.world on 21 Jun 20:53 next collapse

The account’s discovery raises questions about just how many bots are operating on X

I have yet to encounter an actual user of the platform X in the real world.

skozzii@lemmy.ca on 23 Jun 02:59 collapse

Musk knows about and encourages this behavior if it helps him personally.