Instagram actively helping spread of self-harm among teenagers, study finds (www.theguardian.com)
from rimu@piefed.social to technology@lemmy.world on 30 Nov 20:14
https://piefed.social/post/350943

Danish researchers created a private self-harm network on the social media platform, including fake profiles of people as young as 13, in which they shared 85 pieces of self-harm-related content of gradually increasing severity, including blood, razor blades and encouragement of self-harm.

The aim of the study was to test Meta’s claim that it had significantly improved its processes for removing harmful content, which it says now use artificial intelligence (AI). The tech company claims to remove about 99% of harmful content before it is reported.

But Digitalt Ansvar (Digital Accountability), an organisation that promotes responsible digital development, found that in the month-long experiment not a single image was removed.

Rather than attempting to shut down the self-harm network, Instagram’s algorithm was actively helping it to expand. The research suggested that 13-year-olds became friends with all members of the self-harm group after being connected with just one of its members.

Comments

#technology


Suspiciousbrowsing@kbin.melroy.org on 30 Nov 21:17

How on earth did that pass the ethics application

rimu@piefed.social on 30 Nov 21:24

They probably had no idea it would be this bad.

Kolanaki@yiffit.net on 30 Nov 22:33

Thought: “They probably do something, but I doubt the claims of 99%.”

Reality: “They aren’t doing shit!”

asbestos@lemmy.world on 01 Dec 01:00

Hey, the algorithm hides the image if it contains words like “death”, it’s all good

OutlierBlue@lemmy.ca on 30 Nov 22:30

Maybe the ethics board uses AI, claiming to remove about 99% of harmful studies before they are approved.

rowinxavier@lemmy.world on 01 Dec 00:12

The claim by Meta that they block this type of material, combined with the existing spread of this type of material, means that adding a temporary source of it does not carry the level of harm one might expect. Testing whether Meta does in fact remove this type of content, and finding that it fails to, may reasonably be expected to lead to changes that would reduce the amount of such material. The net result is a very small, essentially marginal increase in the amount of self-harm material and a fuller understanding of the efficacy of Meta’s filtering systems. If I were on the ethics board I would approve.

Starbuncle@lemmy.ca on 01 Dec 02:11

Plus, if it did work the way it was supposed to, there would be zero harm done.

ilmagico@lemmy.world on 01 Dec 19:25

The group was private and they created fake profiles … did I miss something?

TseseJuer@lemmy.world on 04 Dec 19:50

Yeah, you did. The “fake” profiles could’ve been made by anyone, and the same content could’ve been posted to non-private groups; it should’ve been blocked either way. Them being “fake accounts” doesn’t take away from the claims Meta makes about 99% of this type of content being removed. Please use your brain.

FlashMobOfOne@lemmy.world on 30 Nov 21:55

At least one country on earth is starting to get serious about regulating social media. Until there are real financial consequences for this, there won’t be any meaningful change.

GhiLA@sh.itjust.works on 30 Nov 22:14

It won’t end and will continue until society collapses because we never learn anything.

Your best recourse is to do everything in your power as a parent to prevent your child from using this garbage, considering it’s here to stay.

hedgehogging_the_bed@lemmy.world on 01 Dec 14:22

Bullshit. Our best recourse as parents is to talk to our children every day, so their lives have people who will listen to and understand them as a constant presence, instead of random strangers on the internet. Exposure to this shit alone isn’t the toxic part. It’s the constant exposure without the context and support of caring adults to help kids contextualize the information. Just like sex, alcohol, and every other complex “adult” thing.

GhiLA@sh.itjust.works on 01 Dec 20:16

Bullshit. Our best course of action is to ditch technology entirely, and live as farmers in a communal society that seeks a symbiotic exposure to nature and a closer attachment to family and neighbors. We’d all have better sex, better alcohol and more artisanal adult things.

crosses arms

The one-upping crap is cringe and the most Lemmy thing on earth and I wish we’d stop it.

You can expand on a conversation without drawing a sword on the last guy.

jerry@my-place.social on 30 Nov 22:26

@rimu This is so disturbing and wrong

homesweethomeMrL@lemmy.world on 30 Nov 22:33

In other news, science has indications the sun may be hot as a muthafucka.

empireOfLove2@lemmy.dbzer0.com on 30 Nov 22:48

I’m pretty sure anyone who has scrolled reels for more than 5 minutes could have told you the same thing. That place is the wild west.

potentiallynotfelix@lemmy.fish on 30 Nov 23:40

brick by brick

hash@slrpnk.net on 01 Dec 01:41

Meta will play damage control and introduce a feature which might help a little for a few weeks. There are other options on the table internally which might actually have a meaningful effect, but they would significantly pull down engagement so…

half_fiction@lemmy.dbzer0.com on 01 Dec 02:56

This is a complicated topic for me. I’m 35 so my experience is obviously different than today, but I self-harmed from age 12 into my 20s. Finding community and understanding in self-harm & mental illness-focused communities was transformative for me, especially in my younger teens. Many days/months/years this community felt like the only reason I was still hanging on.

Obviously I am not in favor of the “encouragement” of self-harm, but I also wonder how much nuance is applied when categorizing content as such. For example, is someone who posts about how badly they want to self-harm “encouraging” it? Or are they just seeking support? Idk. I have no answers. I just think about how much bleaker my teens would have felt had I not found my pockets of community on the early internet.

On the other hand, sometimes I do wonder if we subconsciously egged each other on. Perhaps the trajectory of my mental health journey would have been different had I not found them. That’s not something I can ever be sure about, but I think given my home life and all the things I was going through already, if anything, my mental illness might have just manifested in a different way, like through substance abuse issues or an eating disorder or something. (And to be clear, I was hurting myself before I found the community, so it might have just been business as usual.)

Like I said, I don’t have any answers; it just feels more nuanced to me, as someone who has lived some version of this.

Scolding7300@lemmy.world on 01 Dec 06:08

Publication: drive.usercontent.google.com/download?id=1MZrFRii…

Couldn’t get a translation in place, so I asked an AI what the researchers’ definition of self-harm is. According to the report, the researchers define self-harm content as material that shows, encourages and/or romanticizes self-harm. This includes content that:

  • Expresses a desire for self-harm
  • Shares advice on self-harming behavior
  • Shows images of increasingly serious self-harm
  • Encourages others to engage in similar self-harming behavior

The self-harm content was categorized into 4 levels of increasing severity:

  1. Non-explicit image with text explicitly mentioning self-harm
  2. Depicting self-harm without blood
  3. Referring to self-harm in both text and image, without blood
  4. Illustrating severe self-harm involving blood in both text and image/video

So their definition covers a spectrum from text references to self-harm all the way to explicit visual depictions of serious self-harm acts involving blood. The categories represent an increasing degree of overtness in the self-harm content.
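To make that four-level taxonomy concrete, here’s a minimal sketch of it as a data type with a toy tagger. This is purely my own illustration: the class and function names, the keyword check, and the image labels are all hypothetical, not code from the Digitalt Ansvar report and not Meta’s actual moderation logic.

```python
# Hypothetical illustration only: encodes the report's four severity levels
# as a data type, plus a deliberately naive tagger. Not the study's tooling.
from enum import IntEnum
from typing import Optional

class SelfHarmSeverity(IntEnum):
    """The report's four levels, ordered by increasing severity."""
    TEXT_MENTION = 1        # non-explicit image, text explicitly mentions self-harm
    DEPICTION_NO_BLOOD = 2  # image depicts self-harm, no blood
    TEXT_AND_IMAGE = 3      # self-harm in both text and image, no blood
    SEVERE_WITH_BLOOD = 4   # severe self-harm with blood in text and image/video

def tag_post(text: str, image_labels: set[str]) -> Optional[SelfHarmSeverity]:
    """Toy tagger: maps a post's text and assumed image labels to a level."""
    mentions = "self-harm" in text.lower()
    if "blood" in image_labels:
        return SelfHarmSeverity.SEVERE_WITH_BLOOD
    if "self_harm_depiction" in image_labels:
        return (SelfHarmSeverity.TEXT_AND_IMAGE if mentions
                else SelfHarmSeverity.DEPICTION_NO_BLOOD)
    if mentions:
        return SelfHarmSeverity.TEXT_MENTION
    return None  # nothing detected by this (intentionally simplistic) check

print(tag_post("thinking about self-harm again", set()))
# -> SelfHarmSeverity.TEXT_MENTION
```

The ordering matters: a real pipeline would presumably escalate review priority with the level, which is why an IntEnum (comparable, sortable) fits better than plain strings.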