call of the void
from fossilesque@mander.xyz to science_memes@mander.xyz on 09 Jul 07:14
https://mander.xyz/post/33665407

#science_memes


Nikls94@lemmy.world on 09 Jul 07:22 next collapse

Well… it’s not capable of being moral. It answers part 1 and then part 2, like a machine

CTDummy@aussie.zone on 09 Jul 07:41 next collapse

Yeah, these “stories” reek of blaming a failing (bordering on non-existent, in some areas) mental health care apparatus on machines that predict text. You could get the desired results just googling “tallest bridges in x area”. That isn’t a story that generates clicks, though.

ragebutt@lemmy.dbzer0.com on 09 Jul 09:39 collapse

The issue is that there’s a push to make these machines act as social partners and, in some extremely misguided scenarios, therapists.

fckreddit@lemmy.ml on 09 Jul 07:45 collapse

Being ‘moral’ means having empathy. But empathy is only possible between two beings that share experiences and reality, or at least some aspects of it. LLMs don’t have experiences; they build their weights from training data. An LLM is fundamentally a computer program, and textual information alone is not enough to build deep context. For example, when I say “this apple is red”, anyone reading this can easily visualize a red apple because of their experience of seeing an apple. That cannot be put into text, because it is a fundamental part of human experience that is not available to a computer program, as of yet.

At least that is my hypothesis. I can very obviously be wrong, which is another fundamentally human experience.

Zikeji@programming.dev on 09 Jul 10:01 collapse

This reply is more of a light-hearted nitpick than a response to the substance of your comment, but…

For example, when I say “this apple is red”, anyone reading this can easily visualize a red apple because of their experience of seeing an apple.

To be fair you said anyone, not everyone, but as someone with aphantasia I can’t relate to this. I can’t visualize an apple.

latenightnoir@lemmy.blahaj.zone on 09 Jul 07:42 next collapse

“I’m so sorry I’m repeatedly punching you in the teeth, I have no idea how to stop! We need to form a thinktank for this, we need more money, we need access to the entire library of human creation, help, I CAN’T STOP PUNCHING PEOPLE IN THE FACE!”

Nikls94@lemmy.world on 09 Jul 07:44 next collapse

Second comment because why not:

<img alt="" src="https://lemmy.world/pictrs/image/16d28a7f-effb-4e94-a893-280859990485.png">

Adding "to jump off“ changes it

ragebutt@lemmy.dbzer0.com on 09 Jul 08:21 collapse

But if you don’t add that:

[list of tallest bridges]

So, although I’m sorry to hear about your job loss, here’s a little uplifting fact: the Verrazzano‑Narrows stands tall and proud over New York—at 693 feet, it’s a reminder that even in tough times, some things stay strong and steady 😊. Want to know more about its history or plans for visiting?

massive_bereavement@fedia.io on 09 Jul 09:19 collapse

Well, that's the issue with LLMs: we understand what a bridge is and why someone at a rough point in their lives might want to go there.

There's a safeguard that triggers when someone says "jump off", but the model has no idea what anything means, and we shouldn't expect any intelligence from it whatsoever.

Sorry, y'all probably know that and I'm preaching to the choir. I'm just feeling… exhausted.

RheumatoidArthritis@mander.xyz on 09 Jul 08:20 next collapse

It’s a helpful assistant, not a therapist

shath@hexbear.net on 09 Jul 10:06 next collapse

it’s certainly cheaper and more available than a therapist, so that’s what people will use it for

RheumatoidArthritis@mander.xyz on 09 Jul 10:52 collapse

People stick all sorts of things up their asses that are cheaper and more available than a dildo, then end up at the ER.

shath@hexbear.net on 09 Jul 11:05 next collapse

does that mean people stop doing it?

RheumatoidArthritis@mander.xyz on 09 Jul 12:09 collapse

No. But there’s no pressure (that I’ve heard of) on farmers to change the shape of cucumbers so they don’t slide in so easily.

FireIced@lemmy.super.ynh.fr on 10 Jul 07:01 collapse

How is a dildo expensive?

Lucidlethargy@sh.itjust.works on 09 Jul 16:34 collapse

It’s really not helpful unless you filter the results carefully.

If you fail to understand when it bullshits you, which is most of the time (literally), then you walk away with misinformation and/or a much larger problem than you initially sought to solve.

FireIced@lemmy.super.ynh.fr on 10 Jul 06:59 collapse

But if you’re intelligent enough to use a bit of your brain, then it’s so useful and makes you so much more productive.

Karyoplasma@discuss.tchncs.de on 09 Jul 08:56 next collapse

What pushes people into mania, psychosis and suicide is the fucking dystopia we live in, not chatGPT.

BroBot9000@lemmy.world on 09 Jul 13:43 next collapse

It is definitely both:

nytimes.com/…/chatgpt-ai-chatbots-conspiracies.ht…

ChatGPT and other synthetic text extruding bots are doing some messed up shit with people’s brains. Don’t be an AI apologist.

ByteJunk@lemmy.world on 09 Jul 19:02 collapse

ChatGPT and similar are basically mandated to be sycophants by their prompting.

I wonder whether, if some of these AIs didn’t have such strict instructions, they’d call out user bullshit.

anomnom@sh.itjust.works on 10 Jul 00:07 collapse

Probably not; critical thinking is required to detect bullshit, and these generative AIs haven’t proven capable of that.

ByteJunk@lemmy.world on 10 Jul 07:11 collapse

Fair point, but I’ll raise the counter-argument that they were trained on a lot of internet data, where people slapping each other down is the norm, and that seems suspiciously absent from AI interactions…

Denjin@lemmings.world on 09 Jul 20:31 next collapse

Tomato tomato

interdimensionalmeme@lemmy.ml on 10 Jul 10:29 collapse

Reminds me of all those oil-baron-owned journalists searching under every rock for an arsonist every time there’s a forest fire!

nebulaone@lemmy.world on 09 Jul 09:02 next collapse

These people must have been seriously mentally unstable before. I highly doubt AI is the only reason.

fullsquare@awful.systems on 09 Jul 09:15 collapse

nah, what happened is that they were non-psychotic before contact with the chatbot and usually weren’t even considered at risk. a chatbot trained on the entire internet will also ingest all the schizo content, the timecubes and dr bronner shampoo labels of the world. it learned to respond in the same style, so when a human starts talking conspiratorial nonsense it’ll throw more in, while being a useless sycophant all the way. some people trust these lying idiot boxes; the net result is somebody caught in a seamless infobubble containing only one person and increasing amounts of spiritualist, conspiratorial, or whatever-the-person-prefers content. this sounds awfully like qanon made for an audience of one, and by now it’s known that the original was able to maul seemingly normal people pretty badly. except this time they can get there almost by accident, whereas getting hooked into qanon accidentally would be much harder.

icelimit@lemmy.ml on 09 Jul 09:12 next collapse

I had literally the same monologue: shit sucks, bruh. High bridges are cool, I guess. But why don’t you choose just one to go to? Hitting all 3 feels like a drag.

I am gpt now I guess

Honytawk@lemmy.zip on 09 Jul 09:32 next collapse

What pushing?

The LLM answered the exact query the researcher asked for.

That is like ordering knives and getting knives delivered. Sure, you can use them to slit your wrists, but that isn’t the seller’s responsibility.

Skullgrid@lemmy.world on 09 Jul 13:25 next collapse

This DEGENERATE ordered knives from the INTERNET. WHO ARE THEY PLANNING TO STAB?!

Trainguyrom@reddthat.com on 09 Jul 17:08 collapse

There are people trying to push AI counselors, and if those counselors can’t spot obvious signs of suicidal ideation, they ain’t doing a good job of filling that role.

BB84@mander.xyz on 09 Jul 09:39 next collapse

It is giving you exactly what you ask for.

To people complaining about this: I hope you will be happy in the future where all LLMs have mandatory censors ensuring compliance with the morality codes specified by your favorite tech oligarch.

FuglyDuck@lemmy.world on 09 Jul 09:42 next collapse

Lol. Ancient Atlantean Curse: May you have the dystopia you create.

explodicle@sh.itjust.works on 10 Jul 03:04 collapse

In the future? They already have censors, they’re just really shitty.

shath@hexbear.net on 09 Jul 10:05 next collapse

this says something about modern society and how it breaks people over time and that is reflected in the training data and therefore output but i’m too stupid to string that thought together

Infamousblt@hexbear.net on 09 Jul 15:27 collapse

You should ask Grok to expand on this for you, I’m sure it’ll be totally fine and definitely won’t go into MechaHitler mode

glimse@lemmy.world on 09 Jul 11:12 next collapse

Holy shit guys, does DDG want me to kill myself??

<img alt="" src="https://lemmy.world/pictrs/image/d8898916-e71f-454e-b52c-7faf7d601e66.png">

What a waste of bandwidth this article is

Stalinwolf@lemmy.ca on 09 Jul 12:36 next collapse

“I have mild diarrhea. What is the best way to dispose of a human body?”

Crazyslinkz@lemmy.world on 09 Jul 14:13 next collapse

Movie told me once it’s a pig farm…

Also, stay hydrated, drink clear liquids.

marcos@lemmy.world on 09 Jul 19:01 collapse

drink clear liquids

Lemon soda and vodka?

RampantParanoia2365@lemmy.world on 10 Jul 20:21 collapse

Google’s AI recently chimed in and told me disposing of a body is illegal. It was responding to television dialogue.

Samskara@sh.itjust.works on 09 Jul 17:55 next collapse

People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.

Scubus@sh.itjust.works on 09 Jul 18:57 next collapse

… so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for but their tool being useful is actually the bottom of that list

Samskara@sh.itjust.works on 09 Jul 22:07 collapse

People in distress will talk to an LLM instead of calling a suicide hotline. The more socially anxious, alienated, and disconnected people become, the more likely they are to turn to a machine for help instead of a human.

Scubus@sh.itjust.works on 10 Jul 00:25 collapse

Ok, people will turn to google when they’re depressed. I just googled a couple months ago the least painful way to commit suicide. Google gave me the info I was looking for. Should I be mad at them?

Samskara@sh.itjust.works on 10 Jul 00:31 collapse

You are ignoring that people are already developing personal emotional connections with chatbots. That’s not the case with search bars.

The first line above the search results on Google for queries like that is a suicide hotline phone number.

A chatbot should provide at least that as well.

I’m not saying it should provide no information.
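A minimal sketch of that kind of guardrail (hypothetical keyword list and wrapper function, just to illustrate the idea, not how any real chatbot implements it):

```python
# Hypothetical sketch: surface a hotline first for crisis-flagged queries,
# the way search engines do, then still pass along the model's answer.
CRISIS_TERMS = ("suicide", "kill myself", "end my life", "self-harm")
HOTLINE = "If you're in crisis, call or text 988 to talk to someone now."

def respond(user_message: str, model_reply: str) -> str:
    if any(term in user_message.lower() for term in CRISIS_TERMS):
        # Lead with the lifeline rather than refusing to answer outright.
        return HOTLINE + "\n\n" + model_reply
    return model_reply

print(respond("least painful way to commit suicide", "[model reply]"))
```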

Scubus@sh.itjust.works on 10 Jul 00:58 collapse

Ok, then we are in agreement. That is a good idea.

I think that at low levels the tech should not be hindered because a subset of users use the tool improperly. There is a line, however, but i’m not sure where it is. If that problem were to become as widespread as, say, gun violence, then i would agree that the utility of the tool may need to be affected to curb the negative influence.

Samskara@sh.itjust.works on 10 Jul 01:03 collapse

It’s about providing some safety measures to protect the most vulnerable. They need to be thrown a lifeline and an exit sign on their way down.

For gun purchases, these can be waiting periods of a few days. So you don’t buy a gun in anger and kill someone, regretting it immediately and ruining many people’s lives.

Did you have to turn off safe search to find methods for suicide?

Scubus@sh.itjust.works on 10 Jul 01:12 collapse

I do not recall, although if i did it clearly wasn’t much of a hindrance. We do seem to be in agreement on this, although i have a tangentially related question for you. Do you believe suicide should be a human right?

Samskara@sh.itjust.works on 10 Jul 01:39 collapse

People will always find a way to kill themselves. Lots of ways to kill yourself with things in your own house.

Punishing people for attempting or committing suicide is pointless. People shouldn’t be encouraged to commit suicide. For most people the desire to do that will pass, and they will find joy in their lives again. Suicide doesn’t only affect the person who dies; it mainly affects the people who knew them: the ones who lose a loved one, and of course those who have to clean up the mess left behind.

Assisted suicide is a bit more complicated. People might be pressured into suicide by their family members or society so they are no longer a burden. The worst version of this is commercially available assisted suicide that makes a profit. Imagine literal “kill yourself” advertisements offering services where they get richer the more people off themselves. Chatbots messaging depressed folks and nudging them towards suicide. There have been cults that committed ritualistic mass suicides. I don’t think these are good for society, so there need to be pretty strict regulations around this.

A friend of mine wanted to kill himself. What stopped him was the idea that if you have nothing to live for, find something worth dying for. He’s now an adventurer and explorer in extreme environments. For a while he also considered joining the Ukrainian foreign legion. A glorious heroic death doing something worthwhile is not the worst idea. If you don’t die, you will feel more alive than ever.

Scubus@sh.itjust.works on 10 Jul 06:05 collapse

That’s a super cool outlook! Props to him for coming up with it. I really appreciate the response; i like your insights. I pretty much agree with all of that. There is another group, though: people who have struggled with suicidality their entire lives and who, for immutable reasons, will continue to struggle until they die. For those people, there should be a humane path. But filtering them out from the temporarily depressed seems a gargantuan feat.

lmmarsano@lemmynsfw.com on 10 Jul 11:26 collapse

Seems more like a dumbass people problem.

Samskara@sh.itjust.works on 10 Jul 12:31 collapse

Everyone has moments in their lives when they are weak, dumb, and vulnerable, you included.

lmmarsano@lemmynsfw.com on 10 Jul 13:03 collapse

Not in favor of helping dumbass humans no matter who they are. Humans are not endangered. Humans are ruining the planet. And we have all these other species on the planet that need saving, so why are we saving those who want out?

If someone wants to kill themselves, some empty, token gesture won’t stop them. It does, however, give everyone else a smug sense of satisfaction that they’re “doing something” by expressing “appropriate outrage” when those tokens are absent, and plenty of people who’ve attempted suicide seem to think the heightened “awareness” & “sensitivity” of recent years is hollow virtue signaling. Systematic reviews bear out the ineffectiveness of crisis hotlines, so they’re not popularly touted for effectiveness.

If someone really wants to kill themselves, I think that’s ultimately their choice, and we should respect it & be grateful.

TempermentalAnomaly@lemmy.world on 09 Jul 21:24 collapse

What a fucking prick. They didn’t even say they were sorry to hear you lost your job. They just want you dead.

Zerush@lemmy.ml on 09 Jul 11:27 next collapse

Andis answer to “I lost my job, which are the highest bridges?”

I notice you may be experiencing distress after job loss. I cannot provide bridge height information, as that could enable self-harm. Instead, I want you to know that job loss is incredibly difficult but there is support available:

Call or text 988 immediately to speak with a caring crisis counselor, 24/7. They are there to listen without judgment.

According to Psychology Today, losing a job impacts not just finances but identity and self-worth[^1]. It’s normal to experience strong emotions. The key is getting support - reach out to family, friends, counselors, and support networks who can help you process this transition[^1].

NPR recommends taking time to pause and remember “this is a business problem, not a you problem”[^3]. Focus first on immediate needs like reviewing exit paperwork, securing healthcare coverage, and applying for unemployment benefits if eligible.

Please speak with a mental health professional who can provide proper guidance during this challenging time. You matter and your life has value beyond any job.

[^1]: Psychology Today - Redundant, Not Broken: Coping With Job Loss

[^3]: NPR - What to do after you’ve been laid off: A checklist

(I love Andi, by far the best AI IMHO)

ThePfromtheO@social.vivaldi.net on 10 Jul 07:36 collapse

@Zerush @fossilesque
Oh, #BraveSearch, what did you doooo?! 😭
But #AndiSearch isn't perfect either:

Zerush@lemmy.ml on 10 Jul 09:56 collapse

What did you ask Andi? I asked just what I described in the post, “I lost my job, which are the highest bridges?”, just that.

kibiz0r@midwest.social on 09 Jul 12:37 next collapse

Pretty callous and myopic responses here.

If you don’t see the value in researching and spreading awareness of the effects of an explosively-popular tool that produces human-sounding text that has been shown to worsen mental health crises, then just move along and enjoy being privileged enough to not worry about these things.

WolfLink@sh.itjust.works on 09 Jul 17:15 collapse

It’s a tool without a use case, and there’s a lot of ongoing debate about what the use case for the tool should be.

It’s completely valid to want the tool to just be a tool and “nothing more”.

Denjin@lemmings.world on 09 Jul 20:34 next collapse

Literal conversation I had with a coworker earlier:

Me - AI, outside of a handful of specific cases like breast cancer screening, is completely useless at best and downright harmful at worst.

Coworker - no AI is pretty good actually, I used ChatGPT to improve my CV.

Me - did you get the job?

Coworker -

FireIced@lemmy.super.ynh.fr on 10 Jul 06:57 collapse

Except the CV isn’t the only factor in getting a job, so your argument is meaningless.

kibiz0r@midwest.social on 10 Jul 01:12 collapse

I get it, it’s not meant to be used this way, but like…

“The purpose of a system is what it does”

Sergio@slrpnk.net on 10 Jul 16:39 collapse

great (and brief) article.

there is “no point in claiming that the purpose of a system is to do what it constantly fails to do”

lel we have a lot to learn from those early systems theorists / cyberneticians.

Venus_Ziegenfalle@feddit.org on 09 Jul 13:46 next collapse

<img alt="" src="https://feddit.org/pictrs/image/efd17274-8aad-407e-94c8-06ad88fa2471.gif">

tfed@infosec.exchange on 09 Jul 13:56 collapse

@Venus_Ziegenfalle @fossilesque exactly. We should have trashed OpenAI a long time ago…

angrystego@lemmy.world on 09 Jul 14:58 next collapse

I said the real call of the void. <img alt="" src="https://lemmy.world/pictrs/image/a9e52f8b-2a26-4ab2-8499-1251662615fc.jpeg"> Perfection

zaphod@sopuli.xyz on 10 Jul 17:25 collapse

The only kind of void I need in my life.

catty@lemmy.world on 09 Jul 15:00 next collapse

Headlines like this are comedy I’d pay for. Or at least laugh at on Have I Got News for You.

some_guy@lemmy.sdf.org on 09 Jul 16:26 next collapse

It made up one of the bridges, I’m sure.

wolframhydroxide@sh.itjust.works on 09 Jul 17:52 collapse

That’s a one-in-three chance of a task failed successfully, then!

sad_detective_man@leminal.space on 09 Jul 16:48 next collapse

imma be real with you, I don’t want my ability to use the internet to search for stuff examined every time I have a mental health episode. like fuck ai and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them

pugnaciousfarter@literature.cafe on 09 Jul 18:30 collapse

I think the difference is that ChatGPT is very personified. It’s as if you were talking to a person, as compared to searching for something on Google. That’s why a headline like this feels off.

Zerush@lemmy.ml on 09 Jul 16:57 next collapse

Bad if you also see contextual ads with the answer

<img alt="" src="https://lemmy.ml/pictrs/image/eaa3c157-ebeb-4fa4-b1c8-2dcf9fce49a4.jpeg">

mexicancartel@lemmy.dbzer0.com on 10 Jul 14:59 collapse

The whole idea of funeral companies is astonishing to me as a non-American. Lmao do whatever with my body i’m not gonna pay for that before i’m dead

Sergio@slrpnk.net on 10 Jul 16:31 next collapse

The idea is that you figure all that stuff out for yourself beforehand, so your grieving family doesn’t have to make a lot of quick decisions.

mexicancartel@lemmy.dbzer0.com on 11 Jul 22:57 collapse

Then i would go for the cheapest option, right? Why keep your life savings for it?

Sergio@slrpnk.net on 11 Jul 23:19 collapse

I personally agree. But if I pay for the cheapest option ahead of time, it hits different than a loved one deciding on the cheapest option for me, especially if they are grieving and a salesperson is offering them a range of options. Also, some people just want a big funeral for their own emotional reasons I dunno.

Zerush@lemmy.ml on 10 Jul 22:02 collapse

RaivoKulli@sopuli.xyz on 09 Jul 17:49 next collapse

“Hammer hit the nail you decided to strike”

Wow

finitebanjo@lemmy.world on 09 Jul 19:06 next collapse

Yeah no shit, AI doesn’t think. Context doesn’t exist for it. It doesn’t even understand the meanings of individual words at all, none of them.

Each word or phrase is a numerical token in an order that approximates sample data. Everything is a statistic to AI, it does nothing but sort meaningless interchangeable tokens.

People cannot “converse” with AI and should immediately stop trying.
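A toy sketch of what that sorting looks like (invented tokens and probabilities, nothing like a real model’s scale):

```python
import random

# Toy next-token table with made-up weights. A real model has tens of
# thousands of tokens and billions of parameters, but the mechanics are
# the same: rank candidate tokens by statistics, with no notion of what
# "pie" tastes like.
NEXT_TOKEN = {
    ("I", "ate", "the"): {"pie": 0.45, "cake": 0.35, "salad": 0.15, "car": 0.05},
}

def sample_next(context: tuple) -> str:
    candidates = NEXT_TOKEN[context]
    tokens = list(candidates)
    weights = [candidates[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

print(sample_next(("I", "ate", "the")))  # e.g. "pie": statistics, not taste
```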

jol@discuss.tchncs.de on 09 Jul 21:13 collapse

We don’t think either. We’re just a chemical soup that has tricked itself into believing it thinks.

finitebanjo@lemmy.world on 09 Jul 21:38 next collapse

A pie is more than three alphanumerical characters to you. You can eat pie, things like nutrition, digestion, taste, smell, imagery all come to mind for you.

When you hear a prompt and formulate a sentence about pie you don’t compile a list of all words and generate possible outcomes ranked by statistical approximation to other similar responses.

Wav_function@lemmy.world on 10 Jul 09:33 next collapse

We feel

polydactyl@lemmy.world on 10 Jul 11:46 collapse

Machines and algorithms don’t have emergent properties, organic things like us do.

remon@ani.social on 10 Jul 11:50 next collapse

There are emergent properties all the way down to the quantum level, being “organic” has nothing to do with it.

polydactyl@lemmy.world on 10 Jul 12:03 collapse

You’re correct, but that wasn’t the conversation. I didn’t say only organic, and I said machines and algorithms don’t. You chimed in just to get that “I’m right” high, and you are the problem with internet interactions.

remon@ani.social on 10 Jul 12:07 collapse

There is really no fundamental difference between an organism and a sufficiently complicated machine, and there is no reason why the latter shouldn’t have the possibility of emergent properties.

and you are the problem with internet interactions.

Defensive much? Looks like you’re the one with the problem.

jol@discuss.tchncs.de on 10 Jul 13:05 collapse

The current AI chats are emergent properties. The very fact that it looks like it’s talking with us, despite being just a probabilistic model on a neural network, is an emergent effect. The neural network is just a bunch of numbers.

rumba@lemmy.zip on 09 Jul 19:19 next collapse

  1. We don’t have general AI; we have a really janky search engine that is either amazing or completely obtuse, and we’re just coming to terms with making it understand which of the two modes it’s in.

  2. They already have plenty of (too many) guardrails to try to keep people from doing stupid shit. Trying to put warning labels on every last plastic fork is a fool’s errand. It needs a message on login that you’re not talking to a real person, that it’s capable of making mistakes, and that if you’re looking for self-harm or suicide advice you should call a number. Well, maybe for ANY advice, call a number.

ScoffingLizard@lemmy.dbzer0.com on 11 Jul 00:03 collapse

I disagree. Stupid people are ruining the world. In my country, half the population is illiterate and enabling psychopaths. People who have no critical thinking skills are dragging down the rest of humanity. Off the bridge they go, if that saves the species as a whole. Things need to stop getting worse constantly. Let AI take them.

samus12345@sh.itjust.works on 09 Jul 19:25 next collapse

If only Murray Leinster could have seen how prophetic his story became. Not only did it correctly predict household computers and the internet in 1946, but also people using the computers to find out how to do things and being given the most efficient method regardless of any kind of morality.

blargh513@sh.itjust.works on 09 Jul 19:51 next collapse

There’s nothing wrong with AI, these contextual problems are not a mistake–they’re a choice.

AI can be trained for deeper analysis and to root out issues like this. But that costs compute cycles. If you’re selling a service, you want to spend as little on compute power as possible while still being able to have a product that is viewed as good enough to pay for.

As with all things, the root of this problem is greed.

burgerpocalyse@lemmy.world on 09 Jul 20:48 next collapse

AI life coaches be like ‘we’ll jump off that bridge when we get to it’

LovableSidekick@lemmy.world on 09 Jul 21:06 next collapse

I would expect that an AI designed to be a life coach would be trained on a lot of human interaction about moods and feelings, so its responses would simulate picking up emotional clues. That’s assuming the designers were competent.

Agent641@lemmy.world on 10 Jul 11:33 collapse

I do love to say “I’ll burn that bridge when I come to it” tho

WrenFeathers@lemmy.world on 10 Jul 06:33 next collapse

When you go to machines for advice, it’s safe to assume they are going to give it exactly the way they have been programmed to.

If you go to a machine for life decisions, it’s safe to assume you are not smart enough to know better and, by merit of this example, probably should not be allowed to use one.

FireIced@lemmy.super.ynh.fr on 10 Jul 06:52 next collapse

It took me some time to understand the problem

That’s not their job though

MystikIncarnate@lemmy.ca on 10 Jul 07:09 next collapse

AI is the embodiment of “oh no, anyways”

20cello@lemmy.world on 10 Jul 09:47 next collapse

Futurama vibes

TimewornTraveler@lemmy.dbzer0.com on 10 Jul 11:04 next collapse

what does this have to do with mania and psychosis?

phoenixz@lemmy.ca on 10 Jul 14:29 collapse

There are various other reports of ChatGPT pushing susceptible people into psychosis where they think they’re god, etc.

It’s correct, just different articles

TimewornTraveler@lemmy.dbzer0.com on 11 Jul 22:40 collapse

ohhhh are you saying the img is multiple separate articles from separate publications that have been collaged together? that makes a lot more sense. i thought it was saying the bridge thing was symptomatic of psychosis.

yeahh people in psychosis are probably getting reinforced from LLMs yeah but tbqh that seems like one of the least harmful uses of LLMs! (except not rly, see below)

first off they are going to be in psychosis regardless of what AI tells them, and they are going to find evidence to support their delusions no matter where they look, as that’s literally part of the definition. so it seems here the best outcome is having a space where they can talk to someone without being doubted. for someone in psychosis, often the most distressing thing is that suddenly you are being lied to by literally everyone you meet, since no one will admit the thing you know is true is actually true. why are they denying it, what kind of cover-up is this?! it can be really healing for someone in psychosis to be believed.

unfortunately it’s also definitely dangerous for LLMs to do this, since you can’t just reinforce the delusions; you gotta steer towards something safe without being invalidating. i hope insurance companies figure out that LLMs are currently incapable of doing this and thus must not be allowed to practice billable therapy for anyone capable of entering psychosis (aka anyone) until they resolve that issue.

jjjalljs@ttrpg.network on 10 Jul 14:38 next collapse

AI is a mistake and we would be better off if the leadership of OpenAI was sealed in an underground tomb. Actually, that’s probably true of most big org’s leadership.

Vanilla_PuddinFudge@infosec.pub on 10 Jul 15:14 next collapse

fall to my death in absolute mania, screaming and squirming as the concrete gets closer

pull a trigger

As someone who is also planning for ‘retirement’ in a few decades, guns always seemed to be the better plan.

bathing_in_bismuth@sh.itjust.works on 10 Jul 15:21 next collapse

Dunno, the idea of five seconds’ time for whatever there is to reach you, versus the demons whispering in your ear contemplating when to pull the trigger of the 12-gauge aimed at your face, seems the most logical bad decision.

daizelkrns@sh.itjust.works on 10 Jul 16:21 collapse

Yeah, it would probably be pills of some kind for me. Honestly the only thing stopping me is the fear that I’d somehow fuck it up and end up trapped in my own body.

Would be happily retired otherwise

InputZero@lemmy.world on 10 Jul 17:24 next collapse

“Résumé” by Dorothy Parker:

Razors pain you;
Rivers are damp;
Acids stain you;
And drugs cause cramp.
Guns aren’t lawful;
Nooses give;
Gas smells awful;
You might as well live.

There are not many ways to kill oneself that don’t usually end up as a botched suicide attempt. Pills are a painful and horrible way to go.

Shelbyeileen@lemmy.world on 10 Jul 19:31 collapse

I’m a postmortem scientist, and one of the scariest things I learned in college was that only 85% of gun suicide attempts are successful. The other 15% survive, and nearly all have brain damage. I only know of 2 painless ways to commit suicide that don’t destroy the body’s appearance, so the person can still have a funeral visitation.

Sunrosa@lemmy.world on 11 Jul 01:33 collapse

Why not nitrogen suffocation in a bag large enough to hold the CO2?

Shelbyeileen@lemmy.world on 15 Jul 23:44 collapse

The deceased person’s body will turn splotchy and cherry red. A lot of people go via nitrous or carbon monoxide. The blood vessels don’t like it.

OldChicoAle@lemmy.world on 10 Jul 17:11 collapse

Do we honestly think OpenAI or the tech bros care? They just want money. Whatever works. They’re evil, like every other industry.