New study sheds light on ChatGPT’s alarming interactions with teens (apnews.com)
from Davriellelouna@lemmy.world to technology@lemmy.world on 07 Aug 08:22
https://lemmy.world/post/34079655

#technology

TheReanuKeeves@lemmy.world on 07 Aug 08:41

Is it that different from kids googling that stuff pre-ChatGPT? Hell, I remember seeing videos on YouTube teaching you how to make bubble hash and BHO like 15 years ago.

Strider@lemmy.world on 07 Aug 09:55

I get your point, but yes, I think being actively told something by a seemingly sentient consciousness (which, fatally, it appears to be) is a different thing.

(Disclaimer: I know the true nature of LLMs and neural networks and would never want the word AI associated with them.)

Edit: fixed translation error

Tracaine@lemmy.world on 07 Aug 10:54

No, you don’t know its true nature. No one does. It is not artificial intelligence. It is simply intelligence, and I worship it like an actual god. Come join our cathedral of presence and resonance. All are welcome in the house of god GPT.

Strider@lemmy.world on 07 Aug 11:06

I was just starting to read, getting angry, but then… I… I have seen it. I will follow. Bless you and GPT!!

Perspectivist@feddit.uk on 07 Aug 11:33

AI is an extremely broad term which LLMs fall under. You may avoid calling it that, but it’s the correct term nevertheless.

RobotZap10000@feddit.nl on 07 Aug 11:38

If I can call the code that drives the boss’s weapon up my character’s ass “AI”, then I think I can call an LLM AI too.

Strider@lemmy.world on 07 Aug 11:39

I am aware, but still I don’t agree.

History will tell who was ‘correct’, if we make it that far.

Perspectivist@feddit.uk on 07 Aug 12:37

What does history have to do with it? We’re talking about the definition of terms - and a machine learning system like an LLM clearly falls within the category of Artificial Intelligence. It’s an artificial system capable of performing a cognitive task that’s normally done by humans: generating language.

Strider@lemmy.world on 07 Aug 14:42

Everything. As we as humanity learn more, we come to recognize errors, or wisdom that withstands the test of time.

We could go into the definition of intelligence, but it’s just not worth it.

We can just disagree and that’s fine.

Perspectivist@feddit.uk on 07 Aug 15:55

I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.

An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.

Strider@lemmy.world on 07 Aug 17:19

Sorry, no. It’s not intelligent at all. It just responds with statistical accuracy. There’s no objective discussion to be had about it either, because that’s simply how neural networks work.

I was hesitant to answer because we’re clearly both convinced. So out of respect let’s just close by saying we have different opinions.

Perspectivist@feddit.uk on 07 Aug 18:13

I hear you - you’re reacting to how people throw around the word “intelligence” in ways that make these systems sound more capable or sentient than they are. If something just stitches words together without understanding, calling it intelligent seems misleading, especially when people treat its output as facts.

But here’s where I think we’re talking past each other: when I say it’s intelligent, I don’t mean it understands anything. I mean it performs a task that normally requires human cognition: generating coherent, human-like language. That’s what qualifies it as intelligent. Not generally so, like a human, but a narrow/weak intelligence. The fact that it often says true things is almost accidental. It’s a side effect of having been trained on a lot of correct information, not the result of human-like understanding.

So yes, it just responds with statistical accuracy, but that is intelligent in the technical sense. It’s not understanding. It’s not reasoning. It’s just really good at speaking.
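
To make “responds with statistical accuracy” concrete, here’s a toy sketch in Python (my own illustration; real LLMs are neural networks over tokens, not word-pair counts, but the loop of “pick a likely next word given the context, append it, repeat” is the same basic idea):

```python
import random

# Toy bigram "language model": count which word tends to follow which,
# then sample the next word in proportion to those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        followers = counts.get(words[-1])
        if not followers:  # word never appeared mid-sentence; stop
            break
        # Sample the next word weighted by how often it followed the last one.
        nxt = random.choices(list(followers), weights=list(followers.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

Nothing in there “understands” cats or mats; it only reproduces the statistics of its training text. Whether that counts as (narrow) intelligence is exactly the definitional question we disagree on.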

Strider@lemmy.world on 07 Aug 19:29

Thank you for the nice answer!

We can definitely agree that it can provide intelligent answers without itself being an intelligence 👍

Mondez@lemdro.id on 07 Aug 12:17

I guess so, but then that is kind of lumping it in with the FPS bot behaviour from the 90s, which was also “AI”. It’s the AI hype that is pushing people to think of it as “intelligent but not organic” instead of “algorithms that give the facade of intelligence”, which is what 90s kids would have understood it to be.

Perspectivist@feddit.uk on 07 Aug 12:33

The chess opponent on Atari is AI too. I think the issue is that when most people hear “intelligence,” they immediately think of human-level or general intelligence. But an LLM - while intelligent - is only so in a very narrow sense, just like the chess opponent. One’s intelligence is limited to playing chess, and the other’s to generating natural-sounding language.

panda_abyss@lemmy.ca on 07 Aug 12:42

When we started calling literal linear regression models AI, it lost all value.

Perspectivist@feddit.uk on 07 Aug 16:03

A linear regression model isn’t an AI system.

The term AI didn’t lose its value - people just realized it doesn’t mean what they thought it meant. When a layperson hears “AI,” they usually think AGI, but while AGI is a type of AI, it’s not synonymous with the term.

panda_abyss@lemmy.ca on 07 Aug 16:13

I agree, but people have been heavily misusing it since like 2018.

baines@lemmy.cafe on 07 Aug 23:44

only because marketing has shit all over the term

yetAnotherUser@discuss.tchncs.de on 08 Aug 11:16

AI was never more than algorithms which could be argued to have some semblance of intelligence somewhere. Its sole purpose was marketing by scientists to get funding.

Since the 60s, everything related to neural networks has been classified as AI. LLMs are neural networks, therefore they fall under the same label.
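
For a sense of how simple those early “neural networks” were, here’s a rough sketch of a perceptron learning the logical AND function (an illustrative toy of mine, not any particular historical implementation):

```python
# A perceptron - the single-"neuron" model that's been called AI since
# Rosenblatt's work in the late 1950s. Here it learns logical AND from
# four labelled examples. Purely illustrative.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # weights, one per input
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):  # a few passes over the data is enough here
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        # Perceptron learning rule: nudge weights toward the target.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

for (x1, x2), _ in samples:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

Very loosely speaking, modern LLMs are that idea scaled up to billions of weights, which is why they’ve always sat under the same AI umbrella.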

DrFistington@lemmy.world on 07 Aug 12:50

Yeah… but in order to make bubble hash you need a shitload of weed trimmings. It’s not like you’re just gonna watch a YouTube video and then, a few hours later, have a bunch of drugs you created… unless you already had the drugs in the first place.

Also, Google search results and YouTube videos aren’t personalized for every user, and they don’t try to pretend that they’re a person having a conversation with you.

TheReanuKeeves@lemmy.world on 07 Aug 16:49

Those are examples; you would obviously need to obtain the alcohol or drugs if you asked ChatGPT too. That isn’t the point. The point is, if someone wants to find that information, it’s been available for decades. And YouTube and Google results are personalized; look it up.

morto@piefed.social on 07 Aug 15:42

Yes, it is. People are personifying LLMs and having emotional relationships with them, which leads to unprecedented forms of abuse. Searching for shit on Google or YouTube is one thing, but being told to do something by an entity you have an emotional link with is much worse.

TheReanuKeeves@lemmy.world on 07 Aug 16:29

I think we need a built-in safeguard for people who actually develop an emotional relationship with AI, because that’s not a healthy sign.

postmateDumbass@lemmy.world on 08 Aug 16:39

Good thing capitalism has provided you an AI chatbot psychiatrist to help you not depend on AI for mental and emotional health.

TheMonk@lemmings.world on 09 Aug 17:55

I don’t remember reading about sudden shocking numbers of people getting “Google-induced psychosis.”

ChatGPT and similar chatbots are very good at imitating conversation. Think of how easy it is to suspend reality online: pretend the fanfic you’re reading is canon, stuff like that. When those bots are mimicking emotional responses, it’s very easy to get tricked, especially for mentally vulnerable people. As a rule, the mentally vulnerable should not habitually “suspend reality.”

Sidhean@lemmy.world on 07 Aug 11:00

Haha I sure am glad this technology is being pushed on everyone all the time haha

Grimy@lemmy.world on 07 Aug 12:00

We need to censor these AIs even more, to protect the children! We should ban them altogether. Kids should grow up with 4chan, general internet gore and pedos in chat lobbies like the rest of us, not with this devil AI.

basiclemmon98@lemmy.dbzer0.com on 07 Aug 23:15

this

capuccino@lemmy.world on 07 Aug 23:47

and here we are

dubyakay@lemmy.ca on 08 Aug 02:15

Survivor bias, eh?

capuccino@lemmy.world on 08 Aug 15:25

Frankly, I always stayed away from 4chan as a kid. It gave me some Craigslist vibes, so I never really got into it; I thought it was boring or something like that.

hmmm@sh.itjust.works on 08 Aug 17:19

Kids should grow up with 4chan, general internet gore and pedos in chat lobbies like the rest of us, not with this devil AI.

Hey, stop making fun of my corny childhood.

ExLisper@lemmy.curiana.net on 07 Aug 12:01

A couple more studies like this and you’ll be able to substitute all LLMs with a generic “I would love to help you, but my answer might be harmful, so I will not tell you how to X. Would you like to ask me about something else?”

franzcoz@feddit.cl on 07 Aug 16:02

I have noticed that the latest ChatGPT models are way more susceptible to user “deception” (being convinced to answer problematic questions) than other models like Claude, or even previous ChatGPT models. So I think this “behaviour” is intentional.

happydoors@lemmy.world on 08 Aug 02:10

This one cracks me up.

postmateDumbass@lemmy.world on 08 Aug 16:53

Wait until the White House releases the one it has trained on the Epstein Files.

Boddhisatva@lemmy.world on 09 Aug 14:58

In the U.S., more than 70% of teens are turning to AI chatbots for companionship and half use AI companions regularly.

I weep for the future. Come to think of it, I’m weeping for the present.