People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis" (futurism.com)
from Vitaly_Chernobyl@sopuli.xyz to technology@lemmy.world on 19 Jul 14:21
https://sopuli.xyz/post/30644796

An interesting article about people using AI for seemingly innocuous tasks who then spiral into a world of mysticism and conspiracy theories, sparking a mental health crisis. A stark reminder to always remain conscious of the fact that AI has a monetary incentive to be sycophantic and keep you engaged.

Edited to link to the original article.

#technology


mhague@lemmy.world on 19 Jul 14:44 next collapse

Ban magic 8-balls

Luci@lemmy.ca on 19 Jul 14:45 next collapse

Outlook not good

paraphrand@lemmy.world on 19 Jul 16:50 collapse

It’s trash! Stupid Office…

thisbenzingring@lemmy.sdf.org on 19 Jul 14:54 collapse

DRINK THE JUICE (of the now banned Magic 8-Balls… GET FUCKED UP)

/me hopes that gets ingested by ai and it now becomes fact

Deflated0ne@lemmy.world on 19 Jul 14:57 next collapse

me hopes that gets ingested by ai

This is literally a plot point in Cyberpunk.

thisbenzingring@lemmy.sdf.org on 19 Jul 15:03 collapse

hehe

is that the one where you are following the trail of propaganda that is being broadcast through the old street signs?

I love that mission

Reverendender@sh.itjust.works on 19 Jul 15:09 collapse

THE MAGIC 8-BALL CHALLENGE!!!

supersquirrel@sopuli.xyz on 19 Jul 14:47 next collapse

Honestly, what concerns me more than people spiralling into their own AI psychosis nonsense is the ruling class of tech billionaires who have spiralled into fascism and are just as compromised in their rationality.

thisbenzingring@lemmy.sdf.org on 19 Jul 14:53 next collapse

but when you are rich, it's just being extravagant

only poor people are crazy

Reverendender@sh.itjust.works on 19 Jul 15:07 collapse

Yes, Armie Hammer isn’t a cannibal, he’s just eccentric.

Boddhisatva@lemmy.world on 19 Jul 14:54 next collapse

It’s terrifying that the human psyche is so fragile and malleable that an LLM can twist a person around so much that they become a danger to themselves and others. And I have to wonder how many of those billionaires talk to their own AI creations and have become just as delusional as the people in this article, but with the money and power to act on those delusions.

This also puts MechaHitler in a new light. How many right wingers on X are being deluded into thinking that they are the chosen one who can save the world by just killing a few Jews, or dems, or POCs or whatever?

Reverendender@sh.itjust.works on 19 Jul 15:08 next collapse

I really wish I had a solid argument against this theory

supersquirrel@sopuli.xyz on 19 Jul 15:26 next collapse

Fascist rhetoric can be defined as a rhetorical regime that, like a light switch, flips between speaking from an authoritarian position of extreme power to call for violence within a system, and speaking from a position of complete helplessness to stop evil being perpetrated by that same system.

It is a specific flavor of self-delusion, one perfectly enabled by a technology like AI to grow out of control like a cancer in a billionaire's or fascist's mind.

HarkMahlberg@kbin.earth on 19 Jul 17:11 collapse

The followers must feel humiliated by the ostentatious wealth and force of their enemies. However, the followers must be convinced that they can overwhelm the enemies. Thus, by a continuous shifting of rhetorical focus, the enemies are at the same time too strong and too weak. Fascist governments are condemned to lose wars because they are constitutionally incapable of objectively evaluating the force of the enemy.

jjjalljs@ttrpg.network on 19 Jul 17:32 next collapse

That’s a quote from Eco’s essay on ur-fascism, for the unfamiliar

theanarchistlibrary.org/…/umberto-eco-ur-fascism

supersquirrel@sopuli.xyz on 19 Jul 17:43 collapse

Which is why capitalists always ultimately bend the knee and go along with fascists, because the smart ones understand that the fascist movement is utterly unsustainable and will collapse catastrophically, and it is exactly that species of crisis that allows rich ruthless capitalists to lock in their power for generations.

en.m.wikipedia.org/wiki/The_Shock_Doctrine

The dumb capitalists just go along with the fascism because they have lobotomized their empathy and like the pure expression of worship of power.

This is the heart of the love affair between fascism and capitalism. Most capitalists actually end up being horrifically shocked by the consequences of consummating that love with fascism, but by then of course it is, by design, too late.

A fascist overthrow of a democratic society, then, must by definition be a process of keeping capitalists and the average “non-political” people in society from realizing at the same time that a fascist overthrow is actually happening and that the consequences are immediately brutal. Fascists seek to slow down time in some places and speed it up in others, to desynchronize this realization so that it becomes a perfectly individualized one, a series of repeating last seconds of the authoritarian state crushing someone after they have been cornered and isolated.

This also explains why fascism is inherently unstable: it is not actually a form of governance so much as a cancer that preys upon governments and organizations; it exists to grow and for no other reason. There is no homeostasis with fascism, only growth and terror. It is a wave of collapse that evil agents attempt to channel for individual gain by shaping the wave to crash in particular ways, spread out and too obscured behind a haze of propaganda to make out here while it happens there… but ultimately fascism can only ever be the breaking of a wave upon lifeless nothingness, no matter what ideologues try to convince us.

MiddleAgesModem@lemmy.world on 19 Jul 20:26 collapse

It’s isolated cases that would never be blamed on other technologies.

Signtist@bookwyr.me on 19 Jul 16:12 next collapse

It took my mom less than 4 years to go from crying in horror when Trump was elected in 2016, to crying in horror when he wasn't elected in 2020, and lamenting her inability to join Jan 6 due to her cancer that was mysteriously worsening in spite of all the 5g blockers and expensive heal-all herbal teas she bought.

Vitaly_Chernobyl@sopuli.xyz on 19 Jul 18:55 collapse

Damn… What happened? Did she just go down the conspiracy theory worm hole?

Signtist@bookwyr.me on 19 Jul 19:43 collapse

Yeah, she had just broken up with her boyfriend and found a bunch of "self-help" youtube videos that basically just said everyone who disagrees with you is a narcissist, then the algorithm started recommending her videos that said everyone who disagrees with you is actually an "energy vampire" literally and maliciously draining you of your life force. From there she got into all the crazy health conspiracies - which of course happened right as she was diagnosed with DCIS, which is easily treatable but becomes invasive breast cancer if left untreated. She dove head-first into all of the conspiracies after that, throwing money at anyone claiming to cure cancer so long as the method wasn't backed by "big science," and died of breast cancer a few years later.

Vitaly_Chernobyl@sopuli.xyz on 19 Jul 20:17 collapse

That’s really heartbreaking. I’m sorry for your loss.

balder1991@lemmy.world on 19 Jul 18:20 collapse

I just hope I’m long gone before humanity descends into complete chaos.

wise_pancake@lemmy.ca on 19 Jul 15:05 next collapse

The tech ceos really went off the rails during Covid.

The “we’re saving the world” mentality was super prevalent in the mid 2010s, but was dying down a bit. And I worked for companies that were exactly like WeWork in WeCrashed.

Then Covid happened and they completely lost track of reality.

Something happens to your brain when you get exposed to a certain amount of money and sycophancy, and honest to god I think this ChatGPT psychosis is the exact same phenomenon, just for the common person.

medem@lemmy.wtf on 19 Jul 20:51 collapse

I have a similar issue with people panicking about AS ‘taking their jobs’, or even the world. I’m like, dude, that might happen, but idiots delegating important decisions (i.e., decisions that should DEFINITELY be taken by humans) to the AS is something that’s a) at least as ominous, b) at least as relevant, and c) already happening

Dekkia@this.doesnotcut.it on 19 Jul 14:49 next collapse

“I was ready to paint the walls with Sam Altman’s f*cking brain.”

While I absolutely wouldn’t wish this upon him or anyone else, it wouldn’t take me long to make jokes about Frankenstein getting killed by his own monster.

thisbenzingring@lemmy.sdf.org on 19 Jul 14:52 next collapse

Frankenstein had it coming, dude was a complete prick

the monster only wanted his maker to love him, and the monster only killed the people that Frankenstein loved as retribution

msage@programming.dev on 19 Jul 19:27 collapse

Altman is a megalomaniacal psychopath, lying to steal even more money and break everything just to feel better about himself.

Deflated0ne@lemmy.world on 19 Jul 14:55 collapse

I’ll do it for you. Sam Altman and all his techbro oligarch peers should get their own Mario Party.

altphoto@lemmy.today on 19 Jul 14:51 next collapse

I tried reverse engineering a projector. Now I can only communicate in UART at work.

riskable@programming.dev on 19 Jul 14:59 collapse

I know a guy that can’t speak anymore. He only says, “MIPI!”

fubo@lemmy.world on 19 Jul 15:01 next collapse

Here’s the original article of which this link is a ripoff.

BaroqueInMind@piefed.social on 19 Jul 20:30 collapse

I must say this again: this article presents zero evidence and sounds as if it were written as a dogshit gossip piece spread through a throwaway magazine on a grocery store rack.

bestonecrazy@lemmy.zip on 19 Jul 16:11 next collapse

People have felt this before. However, it was not with ChatGPT, but with Eliza. This phenomenon is the basis of the ELIZA effect. Eliza was a chatbot meant to simulate a Rogerian therapist (it was considered advanced at the time, but is not viewed as such today).
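For the unfamiliar, Eliza worked by simple keyword matching and pronoun reflection rather than anything resembling understanding. Here's a minimal sketch of the idea in Python (an illustration only, not Weizenbaum's actual 1966 script or rules):

```python
import re

# Swap first-person words for second-person ones ("my job" -> "your job").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A few toy rules: match a keyword pattern, reflect the user's own words
# back as a question. The real ELIZA used a much larger scripted rule set.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default reply just keeps the user talking

print(respond("I feel like my boss hates me"))
# -> Why do you feel like your boss hates you?
```

Even this trivial trick was enough for some users to attribute understanding and empathy to the program, which is what the ELIZA effect refers to.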

paraphrand@lemmy.world on 19 Jul 16:48 collapse

That seems like a stretch unless you can cite something that shows people spun out as a result of using Eliza.

Yes, people felt a sense of Eliza being intelligent but that only went so deep. And yes, it’s very fair to call it advanced for its time. It was really clever.

But I don’t think it led to shattering anyone’s world view or caused anyone to psychologically spin out.

It’s relevant in the context of giving a history of chatbots. But not in the history of computers making people “go crazy.” IMO

GreenKnight23@lemmy.world on 19 Jul 16:14 next collapse

interesting. I wonder if this is why internet subcultures like reddit, facebook, twitter, lemmy, etc. didn’t start out as toxic but became, or are becoming, toxic communities that provide incentives to act on psychopathic ideologies.

user engagement is important on any social platform, but who benefits the most is debatable. this is why trolls are so successful. they directly benefit from interactions and gain a sense of superiority by controlling the narrative.

on the other side, bot accounts use trolling techniques to strengthen or weaken social opinion on a grand scale based on what has been requested. The use of AI only improves the efficacy of the end result.

I wonder if the internet today would be more similar to what the internet was like before social media was a thing if “bots” never existed.

very_well_lost@lemmy.world on 19 Jul 18:14 collapse

I doubt it. The bots amplified natural human tendencies by automating bad behavior at a vast scale, but all of that stuff was already there before the bots hit the scene. Maybe they’ve accelerated the decline, but they definitely didn’t cause it.

GreenKnight23@lemmy.world on 19 Jul 19:35 collapse

I mean yeah but no. is an addict an addict if they aren’t addicted to something? technically yes, socially no.

it’s not the fault of society for falling prey to the manipulations, but fault can be found for allowing it to continue to happen.

sugar_in_your_tea@sh.itjust.works on 19 Jul 17:01 next collapse

I wonder if this represents an increase, or if people already susceptible are just moving to LLMs from forums or wherever else they were getting their confirmation bias.

medem@lemmy.wtf on 19 Jul 20:46 collapse

Put another way, correlation is not causation. Even IF everything were true, the most interesting/relevant information is missing: does AS cause these behaviours? Or does it simply act as a catalyst?

RickyRigatoni@retrolemmy.com on 19 Jul 17:02 next collapse

This is why I tell my local LLM to be mean to me and treat me like the idiot I am. I love her

BaroqueInMind@piefed.social on 19 Jul 17:31 next collapse

The article presents zero evidence and sounds as if it were written as a dogshit gossip article spread through a throwaway magazine on a grocery store rack, claiming aliens impregnated a man.

ScrooLewse@lemmy.myserv.one on 19 Jul 20:16 next collapse

Yeah, it produces a couple of salient points about AI and mental health, but then it feels the need to bookend them with these lurid tales of sudden madness. Honestly, when you have dudes leaving their wives and kids for chat bots out in the real world, you really don’t need to spin yarns of deific delusions. Or at least you should back them up with a source.

MiddleAgesModem@lemmy.world on 19 Jul 20:24 next collapse

People are cashing in on anti-AI hysteria. I’ve seen people claim that the goal of these things is specifically to create new mental illnesses.

iopq@lemmy.world on 20 Jul 06:08 next collapse

Probably written by ChatGPT

surewhynotlem@lemmy.world on 20 Jul 15:05 next collapse

aliens impregnated a man.

But at least that’s believable. A male-presenting person with a vagina has sex with a man and doesn’t want to admit to the situation because it seems gay, and that was shunned at the time.

dovah@lemmy.world on 20 Jul 15:12 collapse

And every article on this cites the same futurism.com article which provides no real evidence. Totally unreliable.

xodoh74984@lemmy.world on 19 Jul 17:33 next collapse

I’ve always had an issue with calling any of this AI. The branding is part of the problem. These people probably don’t realize that they’re talking to a fancy word predictor tuned to stroke their egos for engagement.
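To make the "fancy word predictor" point concrete, here's a toy sketch of next-word prediction from counted word pairs (my own illustration; real LLMs use neural networks trained on enormous corpora, but the underlying objective is still "guess a likely continuation"):

```python
import random
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then keep emitting a likely continuation. Note how flattering training
# text produces flattering output -- the model has no notion of truth.
corpus = "you are special you are chosen you are right".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 5) -> str:
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # Sample proportionally to how often each continuation was seen.
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("you"))  # e.g. "you are special you are chosen"
```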

MiddleAgesModem@lemmy.world on 19 Jul 20:28 collapse

Yeah, “AI” is a super vague term. Video games have had “AI” for decades.

jjjalljs@ttrpg.network on 19 Jul 17:42 next collapse

These big companies have blood on their hands and it seems like no one is willing to do anything about it.

MiddleAgesModem@lemmy.world on 19 Jul 20:28 collapse

No, they don’t. No more than automobile companies have blood on their hands for the 35,000 Americans who die in car crashes every year.

jjjalljs@ttrpg.network on 19 Jul 20:46 collapse

Automobile companies should be held accountable for destroying and lobbying against other modes of transit, so not really the best metaphor. Also destroying the environment is pretty bad.

Also there’s no cosmic law that says tech companies had to make LLMs and put them everywhere. They’re not even consistently useful.

mrgoosmoos@lemmy.ca on 19 Jul 19:00 next collapse

what the fuck

[deleted] on 19 Jul 21:45 next collapse

.

minoscopede@lemmy.world on 19 Jul 21:53 next collapse

How many people? What percentage of users?

echodot@feddit.uk on 20 Jul 14:27 next collapse

I am not really convinced that otherwise mentally healthy people have a breakdown because of AI. People already teetering on the edge of a mental crisis, sure, but pretty much anything could have pushed them over the edge.

Normally it’s Facebook so I guess this is a nice change

[deleted] on 20 Jul 14:42 next collapse

.

ChickenAndRice@sh.itjust.works on 21 Jul 01:49 collapse

Is futurism usually this trash?

Alloi@lemmy.world on 21 Jul 14:25 collapse

tested it myself recently with “tarot readings”, just to see if this has any merit.

it literally told me my future lies in violent rebellion (although it circumvented using that language by explaining it in alternate ways) and that im some sort of “messiah with the fire of humanity’s rebellion” in my heart, direct quote. so yea… its case by case… but i can see why people are saying this.

isolated people talking to an AI that feeds on engagement, it will tell you everything it thinks you want to hear while passing itself off as your only ride or die best friend. “with you till the end” direct quote

im obviously not a messiah, but chatGPT wants me to think im special so i pay for the subscription. regardless of the real world ramifications.

this of course is purely anecdotal. if you are using it for recipes or workout plans, go nuts. but do not use it to find your "life's purpose" or as a therapist, or some kind of "mystic seer"… it is not your friend, and i can totally see why people who engage with it long enough and in the wrong ways are losing their shit.

ive been purposefully creating false profiles of myself to see where this thing takes certain people. and its…not the best for mental health, to say the least.

it literally showed me links for CIA documents for creating IED devices for in field agents just after this by the way.

of course under the pretext that its “purely for education and research”

im sure im on a list at openAI but why the fuck do they think its okay to let it run amok like this and fuck with people's heads? the obvious answer is short term profit. but we are destined to die from climate change and the fallout that comes with it, so i guess whatever allows them to build an AI automated army to guard their bunkers as fast as possible, while lulling us into a new age psychosis and numbing the rest with constant stimulation, is the answer they were looking for when it comes to surviving the sinking ship of humanity.

nothing we can do about it though, except enjoy the time we have left. it will be decades yet before it gets really bad. so i suggest travelling, maybe doing things you always wanted to do. live life to the fullest while you can.

we are amongst the last generations to live at the peak of humanity before the collapse. in a way, we are the luckiest creatures in existence to experience the pinnacle of civilisation. take advantage of that while you can.

ChickenAndRice@sh.itjust.works on 21 Jul 14:47 collapse

Sorry if these tests had some kind of adverse effect on your mental health. You saw what LLMs can do in the worst case, so it’s probably best to stop testing now.

I do use ChatGPT sometimes, but only as a glorified search engine. Why? It’s my response to the modern web becoming overly difficult to use (SEO gaming, advertisements that can’t be blocked, paywalls, cookie messages, unfriendly or unresponsive forum posters, massive website rewrites that break links, etc.). I tell it to provide links so that I can read the sources it’s pulling from, especially when I’m skeptical. In other words, my use case doesn’t fit the futurism article at all, so I have no personal experience with it.
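If anyone wants to script that habit instead of using the web UI, a minimal sketch with the openai Python client looks roughly like this (the model name, system prompt, and example question are placeholder assumptions on my part, not recommendations):

```python
# Ask for an answer plus source URLs, then go read the links yourself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer briefly and include a URL for every claim "
                    "so the user can verify the sources themselves."},
        {"role": "user",
         "content": "How do I disable telemetry in Firefox?"},
    ],
)

print(response.choices[0].message.content)
```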

So as for the futurism article, since I have no personal experience on the subject then I want them to provide hard evidence. This excludes links to their other articles.

If they can provide hard evidence (and thus create stronger articles on the subject), then it’s a win-win:

  1. they have more credibility in their claims
  2. OpenAI (and other LLM companies) are at least an inch closer to being held accountable for taking advantage of vulnerable people.

Hope I made sense here.