How chatbots could spark the next big mental health crisis. (www.platformer.news)
from Tea@programming.dev to technology@lemmy.world on 25 Mar 09:39
https://programming.dev/post/27501969

New research from OpenAI shows that heavy chatbot usage is correlated with loneliness and reduced socialization. Will AI companies learn from social networks’ mistakes?

#technology


MagicShel@lemmy.zip on 25 Mar 10:19

Note that these studies aren’t suggesting that heavy ChatGPT usage directly causes loneliness. Rather, it suggests that lonely people are more likely to seek emotional bonds with bots

The important question here is: do lonely people seek out interaction with AI, or does AI create lonely people? The article clearly acknowledges this and then treats the latter as the likely conclusion. It definitely merits greater study.

taladar@sh.itjust.works on 25 Mar 12:36

Or does AI prey on lonely people much like other types of scams do?

MagicShel@lemmy.zip on 25 Mar 13:13

It’s not sentient and has no agenda. It’s fair to suggest that services that advertise themselves as “AI companions” appeal to / prey on lonely people.

It’s not a scam unless it purports to be a real person.

taladar@sh.itjust.works on 25 Mar 13:21

Well, I was using the term more in reference to the industry than the software itself. The thought of the kind of AI we currently have having intentions of its own didn’t even occur to me.

Arkouda@lemmy.ca on 25 Mar 15:33

It’s not sentient and has no agenda.

The humans who program them are and do.

Enkers@sh.itjust.works on 25 Mar 16:51

The AI industry certainly does.

If you’re going to use an LLM, it’s pretty straightforward to roll your own with something like LM Studio, though.
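
A minimal sketch of what that can look like (not from the thread): it assumes LM Studio’s local server is running on its default port, which exposes an OpenAI-compatible API, and that the openai Python package is installed; the model name below is a placeholder for whatever model you have loaded.

```python
# Minimal sketch: chatting with a locally hosted model via LM Studio's built-in
# server, which speaks an OpenAI-compatible API and listens on
# http://localhost:1234/v1 by default. Requires: pip install openai
from openai import OpenAI

# The API key is ignored by the local server; any non-empty string works.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is currently loaded
    messages=[{"role": "user", "content": "Hello from a fully local chat."}],
)
print(response.choices[0].message.content)
```

Nothing leaves your machine, and there is no company on the other end optimizing for your engagement.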

biofaust@lemmy.world on 25 Mar 10:41

What I could easily see happening is this: if that particular subset of users turns out to be high-spending, or if the AI wrapper products that appeal to them prove to be, then this result will be disregarded, no matter the direction of the correlation.

liverbe@lemmy.world on 25 Mar 11:12

Maybe this internet thing was a bad idea? 🤔

taladar@sh.itjust.works on 25 Mar 12:37

That whole humanity thing was a bad idea, the internet is merely a symptom.

SoftestSapphic@lemmy.world on 25 Mar 15:56

An economic system of infinite growth was the bad idea.

The internet was fine before it started being monetized.

huppakee@lemm.ee on 25 Mar 11:33

Too bad nobody saw this coming; they could have made a great movie about it 10 years ago.

pezhore@infosec.pub on 25 Mar 12:33

[animated GIF: https://infosec.pub/pictrs/image/a8099951-36f0-466c-b32d-f748670427b2.gif]

Landless2029@lemmy.world on 26 Mar 02:00

I don’t get this reference. Anyone explain?

pezhore@infosec.pub on 26 Mar 02:49

I’m not sure if the person I replied to was thinking about this movie in particular, but it certainly came to mind when I posted that gif:

en.m.wikipedia.org/wiki/Her_(2013_film)

Landless2029@lemmy.world on 26 Mar 11:05

Fantastic. I gotta track this down. Thanks.

doodledup@lemmy.world on 25 Mar 12:06

They might be confusing correlation with causality. A bit biased and confused.

alykanas@slrpnk.net on 25 Mar 18:53

ChatGPT is my only friend right now

Taniwha420@lemmy.world on 25 Mar 19:54

I really haven’t used AI that much, though I can see it has applications for my work, which is primarily communicating with people. I recently decided to familiarise myself with ChatGPT.

I very quickly noticed that it is an excellent reflective listener. I wanted to know more about its intelligence, so I kept trying to make the conversation about AI and its ‘personality’. Every time, it flipped the conversation to make it about me. It was interesting, but I could feel a concern growing. Why?

Its responses are incredibly validating, beyond what you could ever expect in a mutual relationship with a human. Occupying a public position where I can count on very little external validation, the conversation felt GOOD. 1) Why seek human interaction when AI can be so emotionally fulfilling? 2) What human in a reciprocal and mutually supportive relationship could live up to that level of support and validation?

I believe that there is correlation: people who are lonely would find fulfilling conversation in AI … and never worry about being challenged by that relationship. But I also believe causation is highly probable; once you’ve been fulfilled/validated in such an undemanding way by AI, what human could live up to that? Having become accustomed to that level of self-centredness in dialogue, how tolerant would a person be of real-life conflict? Not very, I suspect: just go home and fire up the perfect conversational validator. Human echo chambers have already made us poor enough at handling differences and conflict.