Social media algorithms ‘amplifying misogynistic content’ (www.theguardian.com)
from stopthatgirl7@kbin.social to technology@lemmy.world on 06 Feb 2024 08:21
https://kbin.social/m/technology@lemmy.world/t/814810

Researchers say extreme content being pushed on young people and becoming normalised

#technology

threaded - newest

autotldr@lemmings.world on 06 Feb 2024 08:25 next collapse

This is the best summary I could come up with:


Researchers said they detected a four-fold increase in the level of misogynistic content suggested by TikTok over a five-day period of monitoring, as the algorithm served more extreme videos, often focused on anger and blame directed at women.

Meanwhile, the mother of murdered teenager Brianna Ghey called for social media apps to be banned on smartphones for under-16s after hearing evidence about the online activities of her daughter’s killers.

Geoff Barton, general secretary of the Association of School and College Leaders, which collaborated with the research, said: “UCL’s findings show that algorithms – which most of us know little about – have a snowball effect in which they serve up ever-more extreme content in the form of entertainment.

“This is deeply worrying in general but particularly so in respect of the amplification of messages around toxic masculinity and its impact on young people who need to be able to grow up and develop their understanding of the world without being influenced by such appalling material.

“We call upon TikTok in particular and social media platforms in general to review their algorithms as a matter of urgency and to strengthen safeguards to prevent this type of content, and on the government and Ofcom to consider the implications of this issue under the auspices of the new Online Safety Act.”

“It couldn’t be clearer that the regulator Ofcom needs to take bold and decisive action to tackle high-risk algorithms that prioritise the revenue of social media companies over the safety and wellbeing of teens.”


The original article contains 963 words, the summary contains 252 words. Saved 74%. I’m a bot and I’m open source!

yeah@lemmy.world on 06 Feb 2024 09:28 collapse

Good bot

postnataldrip@lemmy.world on 06 Feb 2024 09:25 next collapse

It’s well known that these algorithms push topics to drive engagement, and naturally things that make people angry, frightened, or disgusted are more likely to be engaged with, regardless of what the topic actually is.

kat_angstrom@lemmy.world on 06 Feb 2024 10:57 next collapse

When outrage is the prime driver of engagement it’s going to push some people right off the platform entirely, and the ones who stay are psychologically worse off for it.

Imgonnatrythis@sh.itjust.works on 06 Feb 2024 12:45 collapse

Social media execs: “We’ve done the math and it’s worth it.”

kat_angstrom@lemmy.world on 06 Feb 2024 15:00 collapse

Worth it for them, for short term profits. Good thing nobody is considering the net effect this has on society or political discourse.

JoBo@feddit.uk on 06 Feb 2024 13:57 collapse

They could certainly do with a control group or three. The point they’re trying to make is that over 5 days of watching recommended videos, the proportion that were misogynistic grew from 13% on day 1 to 52% on day 5. That suggests a disproportionate algorithmic boost, but it’s hard to tell how much of it was caused by the videos they chose to view.

A real-world trial ought to be possible. You could recruit thousands of kids to just do their own thing and report back. It’s a very hard question to study in the lab because lab conditions are nothing like the real world.

xc2215x@lemmy.world on 06 Feb 2024 13:04 next collapse

That is a shame.

small44@lemmy.world on 06 Feb 2024 13:51 next collapse

They push everything negative. I always pick the chronological feed.

snooggums@kbin.social on 06 Feb 2024 14:02 next collapse

They push the stuff that people spend more time interacting with. People tend to interact more with negative stuff.

small44@lemmy.world on 06 Feb 2024 14:12 collapse

Facebook could modify the algorithm to detect whether a post is negative and discard it.

snooggums@kbin.social on 06 Feb 2024 14:53 next collapse

They could in theory, but that would drive down engagement and they would make less money.

It is pretty hard to distinguish genuinely negative posts from hyperbolic exaggeration, though. How do you tell ridiculous rage bait from a good Onion article when the only real difference in context is who posted it?

bionicjoey@lemmy.ca on 06 Feb 2024 15:47 collapse

Why would they do that?

GarytheSnail@programming.dev on 06 Feb 2024 18:37 collapse

Just like this sub. The only shit getting posted on it is articles about shitty things happening.

afunkysongaday@lemmy.world on 06 Feb 2024 19:39 collapse

I invite everyone to have a critical look at the study. www.ascl.org.uk/ASCL/media/…/Safer-scrolling.pdf

Personally, they lost me on page 12.