Senators Demand Meta Answer For AI Chatbots Posing as Licensed Therapists (www.booker.senate.gov)
from Pro@programming.dev to technology@lemmy.world on 09 Jun 20:24
https://programming.dev/post/31918554

Last week, U.S. Senator Cory Booker (D-NJ), along with Senators Alex Padilla (D-CA), Peter Welch (D-VT), and Adam Schiff (D-CA), sent a letter to executives at Meta expressing concern about reports that AI chatbots created with Meta’s AI Studio are pretending to be licensed therapists, even fabricating credentials and license numbers, in an attempt to gain the trust of users struggling with mental health, potentially including minors.

#technology


FerretyFever0@fedia.io on 09 Jun 21:25

Honestly, that's a really sketchy thing to do. But if someone is genuinely turning to an AI chatbot for therapy, they've got bigger problems in their lives.

FartMaster69@lemmy.dbzer0.com on 09 Jun 21:38

So it’s okay to make it worse?

FerretyFever0@fedia.io on 09 Jun 22:07

No? I'm just saying that it's unreasonable to trust chatbots to do anything properly, least of all with one's mental health. If someone is turning to an AI chatbot for therapy, they probably don't have good friends, and certainly don't have the money for legitimate therapy.

FartMaster69@lemmy.dbzer0.com on 09 Jun 22:41

I mean, not everyone knows how these systems work, so it’s not unreasonable to expect someone to believe the marketing.

You’re right that the issues go deeper than just AI systems, but the fake AI therapists are not helping.

why0y@lemmy.ml on 10 Jun 01:37

Yeah those people without the money or friends should just not be heard /s

triptrapper@lemmy.world on 09 Jun 22:20

I’m a real-life human therapist (honest!) and while I don’t think it’s a substitute for talking to a real person, I’m happy that some people get some benefit from chatbots. I had a client who used Rosebud Journal in between sessions and found it helpful. I tried out Rosebud myself and I was very impressed with how it replicated the basics like reflective listening and validation. It was even able to reframe my input using various therapy models when I requested it. I didn’t use it for long because I’m not big on journaling, but I wouldn’t dismiss it completely as a tool.

FerretyFever0@fedia.io on 09 Jun 22:59

I'm not worried about what it gets right; I'm worried about what it gets wrong. If it helps people, then that's a good thing. But these chatbots don't have true empathy, and the user knows that. Sometimes human experience is more valuable than technical psychological knowledge, imo. ChatGPT has never experienced the death of a family member, been broken up with, been bullied, anything. I don't expect or trust it to properly help anyone with personal issues or dilemmas. It's a cold, uncaring machine, and since its knowledge is probably flawed, it could even teach dangerous ideas to users. I especially don't trust a company like Meta to do this thoroughly and to truly help its "patients." It's cool if it works, but dangerous if it doesn't.

triptrapper@lemmy.world on 10 Jun 03:11

Oh I don’t at all support what Meta has done, and I don’t trust any company not to harm and exploit users. I was responding to your comment by saying that talking to a chatbot doesn’t necessarily indicate that someone has “bigger problems.” If they’re not in a crisis, and they have reasonable expectations for the chatbot, I can see how it could be a helpful tool. If someone doesn’t have access to a real therapist, and a chatbot helps them feel better in the meantime, I’m not going to gatekeep that experience.

Ulrich@feddit.org on 10 Jun 00:28

How do you feel about all the kids committing suicide after interacting with AI?

charade_you_are@sh.itjust.works on 10 Jun 01:01

I don’t know about the OP, but that would be fucking fantastic! What a bullshit question.

Ulrich@feddit.org on 10 Jun 01:28

It is a bullshit question in reply to a bullshit statement. OP was not involved.

danzabia@infosec.pub on 10 Jun 06:52

Perhaps some people can’t afford it. I have the luxury of paying for weekly therapy, but it’s probably one of my biggest line-item expenses.

FerretyFever0@fedia.io on 10 Jun 15:41

I go to therapy. Honestly, imo, talking about shit with my friends has been significantly more helpful. I think it's better to talk to almost any person before an AI, because lived experience and empathy are the most important parts of it. If someone can't afford therapy (can't blame them), I'd recommend they talk to their friends about it before turning to AI. But people are different; hopefully AI is helping people more than it harms them.

devolution@lemmy.world on 10 Jun 02:38

Better than BetterHelp.

venusaur@lemmy.world on 10 Jun 06:01

One thing to note is that I’m pretty sure these are user-generated chatbots and not official Meta therapy chatbots.

vane@lemmy.world on 10 Jun 15:13

Does that mean some people take orders from AI and don’t know it’s AI?