New York bans “addictive feeds” for teens (www.theverge.com)
from jeffw@lemmy.world to technology@lemmy.world on 21 Jun 03:33
https://lemmy.world/post/16757959

#technology

threaded - newest

autotldr@lemmings.world on 21 Jun 03:35 next collapse

This is the best summary I could come up with:


New York Governor Kathy Hochul (D) signed two bills into law on Thursday that aim to protect kids and teens from social media harms, making it the latest state to take action as federal proposals still await votes.

Florida Governor Ron DeSantis (R), for example, signed into law in March a bill requiring parents’ consent for kids under 16 to hold social media accounts.

The bill instructs the attorney general’s office to lay out appropriate age verification methods and says those can’t solely rely on biometrics or government identification.

A California court blocked that state’s Age-Appropriate Design Code last year, which sought to address data collection on kids and make platforms more responsible for how their services might harm children.

NetChoice vice president and general counsel Carl Szabo said in a statement that the law would “increase children’s exposure to harmful content by requiring websites to order feeds chronologically, prioritizing recent posts about sensitive topics.”

Adam Kovacevich, CEO of center-left tech industry group Chamber of Progress, warned that the SAFE for Kids Act will “face a constitutional minefield” because it deals with what speech platforms can show users.


The original article contains 746 words, the summary contains 188 words. Saved 75%. I’m a bot and I’m open source!

conciselyverbose@sh.itjust.works on 21 Jun 20:19 collapse

NetChoice vice president and general counsel Carl Szabo said in a statement that the law would “increase children’s exposure to harmful content by requiring websites to order feeds chronologically, prioritizing recent posts about sensitive topics.”

What the fuck kind of ridiculously motivated logic is this?

magic_smoke@links.hackliberty.org on 21 Jun 04:24 next collapse

Neat, how about actually making some sweeping regulations tackling corporate EULA-washed malware?

Why do companies get to keep injecting spyware and even rootkits into their OS/software without ever explaining the consequences in a way a lay person can understand?

Used to be when companies did that they got punished. Anyone remember that Sony BMG case with rootkit-enabled DRM, or BonziBuddy, whose EULA allowed developers to sell your information to advertisers?

Remember the fucking stink people threw over them? Remember the fucking lawsuits? This shit is just a normal Tuesday for MFAANG. Shit even fucking video games are pushing rootkits down your throat these days. They need to be spanked BAD.

bobs_monkey@lemm.ee on 21 Jun 05:06 collapse

Doesn’t help when those who write the laws are on the take.

That said, as difficult as it is these days, avoid all of these companies’ products like the plague.

regrub@lemmy.world on 21 Jun 04:41 next collapse

The data protection laws are good, but a lot of the other bills for banning dark patterns and other annoying “features” sound difficult to enforce

PhlubbaDubba@lemm.ee on 21 Jun 05:22 collapse

Eh, whackamole enforcement usually cuts mustard with this kind of stuff.

Like yeah someone’s gonna do it anyways just because, but then all it takes is enough people raising an alarm to bring it down, and as a side effect, remove more shitass developers from the market.

You end up with an equilibrium where not every example is getting the hammers of justice, but enough examples are that the average consumer still feels the benefit of a noticeably less toxic internet.

NateNate60@lemmy.world on 21 Jun 06:45 collapse

The effectiveness of bans has always hinged on two factors:

  • The likelihood of being caught
  • The severity of punishment if caught

For example, everyone knows that the odds of being caught speeding are pretty low, but if the punishment for speeding is ten years imprisonment, then very few people will risk speeding.

Similarly, even if the odds of getting caught violating this law are only 1%, if the punishment is banning the platform and shutting down the company along with a fine equal to a year’s worth of revenue, then companies will probably not want to risk it.
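The reasoning above is a simple expected-value calculation, and it can be sketched in a few lines. This is a toy model with made-up numbers, not a claim about any real company's finances:

```python
# Toy expected-value model of deterrence: a purely rational company compares
# the profit from violating a rule against the expected penalty if caught.

def expected_penalty(p_caught: float, fine: float) -> float:
    """Expected cost of a violation: probability of detection times the fine."""
    return p_caught * fine

def worth_violating(profit: float, p_caught: float, fine: float) -> bool:
    """A rational actor violates only if the profit exceeds the expected penalty."""
    return profit > expected_penalty(p_caught, fine)

# Hypothetical numbers: a 1% chance of being caught, with the fine set to
# a full year's revenue, still outweighs a modest profit from violating.
annual_revenue = 1_000_000_000       # hypothetical yearly revenue (the fine)
profit_small = 5_000_000             # modest extra profit from the addictive feed
profit_large = 50_000_000            # much larger extra profit

deterred = not worth_violating(profit_small, 0.01, annual_revenue)
not_deterred = worth_violating(profit_large, 0.01, annual_revenue)
```

The sketch also shows the model's limit: if the profit from violating grows large enough (the second case), even a year's-revenue fine at 1% detection odds stops deterring, which is why severity and detection probability have to scale together.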

onion@feddit.de on 21 Jun 08:55 collapse

I’ve heard that severity actually doesn’t work as a deterrent; people tend to assume they won’t get caught

FreakinSteve@lemmy.world on 21 Jun 04:48 next collapse

On the upside they have the third largest army in the world taking up all of their resources and pissing all over the citizens

ryper@lemmy.ca on 21 Jun 06:19 next collapse

Why are “addictive feeds” OK for adults?

otp@sh.itjust.works on 21 Jun 06:31 next collapse

They aren’t, but adults are allowed to decide about that addiction

NateNate60@lemmy.world on 21 Jun 06:41 next collapse

I think it’s also the case that it has a bigger impact on developing brains, which might be more easily addicted.

I don’t have any evidence for this, I’m just guessing here.

Fedizen@lemmy.world on 21 Jun 06:54 collapse

kind of like we’re allowed to decide whether to do heroin or not…

AProfessional@lemmy.world on 21 Jun 10:42 next collapse

It has been decriminalized in the recent past, so it’s not off the table.

otp@sh.itjust.works on 21 Jun 14:01 next collapse

In New York? Interesting.

But if you’re being facetious, that’s why I specified “that” addiction.

I mean, if they banned everything that was harmful and addictive, then nicotine and alcohol would be banned too. But clearly they don’t. Yet those are banned for children.

sugar_in_your_tea@sh.itjust.works on 22 Jun 14:38 collapse

I think that should also be allowed, but only under medical supervision. Ideally, there should be a legal path to pretty much anything you want to do, with controls in place to protect the public from your choices.

Doomsider@lemmy.world on 21 Jun 06:48 next collapse

They are not. It is past time to call them on their psychological manipulation bullshit. Addictive feeds are also just the tip of the giant shitberg that corporate run social media has become.

rottingleaf@lemmy.zip on 21 Jun 14:31 collapse

Yes, but if you apply the principle consistently, you’ll eventually have to ban half the usual ads too, or it’ll be no good, because they’ll manage to weasel out.

Social media are just what happens when a few gigantic non-transparent organizations get the usual Goebbels powers plus the ability to match people with content, people with people, people with groups as they see fit.

It’s both a legal and a technical problem. The legal part is making this a no-no. The technical part is having truly decentralized, asynchronous social media. Federation, like with ActivityPub, is insufficient; it has to be homogeneous. I mean, I’d like it to use ActivityPub-connected servers as authentication providers and as a contact directory, but not for the rest.

EDIT: I think freenet.org, as in Locutus and not the old Freenet we all love, is aimed at exactly this.

littlewonder@lemmy.world on 21 Jun 07:01 next collapse

Are you kidding? There’s a whole “YOU CAN’T TELL ME WHAT TO DO” segment of the US population that cries until they die of dehydration every time the government says “regulation” three times in a mirror.

aisteru@lemmy.aisteru.ch on 21 Jun 08:08 next collapse

Same reason “addictive liquids” are OK for adults: they sell

EngineerGaming@feddit.nl on 21 Jun 09:09 next collapse

Because this is an excuse to KYC all users.

IsThisAnAI@lemmy.world on 21 Jun 09:23 collapse

Because you don’t have the votes for your fascist nanny state.

captainjaneway@lemmy.world on 21 Jun 06:30 next collapse

How do they prove your age? Non-tech-savvy people probably just give their kids a phone and don’t do much to lock it down.

Spotlight7573@lemmy.world on 22 Jun 10:16 collapse

From the description of the bill:

legislation.nysenate.gov/pdf/bills/2023/S7694A

To limit access to addictive feeds, this act will require social media companies to use commercially reasonable methods to determine user age. Regulations by the attorney general will provide guidance, but this flexible standard will be based on the totality of the circumstances, including the size, financial resources, and technical capabilities of a given social media company, and the costs and effectiveness of available age determination techniques for users of a given social media platform. For example, if a social media company is technically and financially capable of effectively determining the age of a user based on its existing data concerning that user, it may be commercially reasonable to present that as an age determination option to users. Although the legislature considered a statutory mandate for companies to respect automated browser or device signals whereby users can inform a covered operator that they are a covered minor, we determined that the attorney general would already have discretion to promulgate such a mandate through its rulemaking authority related to commercially reasonable and technologically feasible age determination methods. The legislature believes that such a mandate can be more effectively considered and tailored through that rulemaking process. Existing New York antidiscrimination laws and the attorney general’s regulations will require, regardless, that social media companies provide a range of age verification methods all New Yorkers can use, and will not use age assurance methods that rely solely on biometrics or require government identification that many New Yorkers do not possess.

In other words: sites will have to figure it out and make sure that it’s both effective and non-discriminatory, and the safe option would be for sites to treat everyone like children until proven otherwise.

sugar_in_your_tea@sh.itjust.works on 22 Jun 14:40 collapse

So they’re all going to request, store, and sell even more personally identifiable information.

Spotlight7573@lemmy.world on 23 Jun 10:11 collapse

No, no, no, it’s super secure you see, they have this in the law too:

Information collected for the purpose of determining a covered user’s age under paragraph (a) of subdivision one of this section shall not be used for any purpose other than age determination and shall be deleted immediately after an attempt to determine a covered user’s age, except where necessary for compliance with any applicable provisions of New York state or federal law or regulation.

And they’ll totally never be hacked.

sugar_in_your_tea@sh.itjust.works on 23 Jun 13:11 collapse

And that exception seems like companies could say something like, “but what about second verification?”

Nope, I don’t trust companies to actually delete stuff.

IsThisAnAI@lemmy.world on 21 Jun 09:21 next collapse

This shit applies directly to lemmy. Y’all seem to be blinded by your hate of TikTok.

Ibuthyr@discuss.tchncs.de on 21 Jun 10:06 next collapse

I honestly wouldn’t mind. Addictive feeds, no matter on which platform, are poison for a developing mind. The first generations that suffer from an upbringing under addictive feeds are showing apathy towards pretty much anything. And they’re easily brainwashed. Just look at the EU elections. The teens predominantly voted for the far right. You don’t do that if you are of sound mind.

Toribor@corndog.social on 21 Jun 11:11 next collapse

The problem is algorithmically driven content feeds and the lack of transparency around them. These algorithms drive engagement, which prioritizes content that makes people angry, not content that makes people happy. These feeds are full of misinformation, conspiratorial thinking, rage bait, and other negativity, with very little user control to protect themselves, curate the feed, or have neutral access to news and politics.

Lemmy sorts content very simply based on user upvotes. If you want to know why you’re seeing a post you can see exactly who upvoted it and what instances that traffic came from. It’s not immune to being manipulated but it can’t be done secretly or in a centralized way.

Yet based on their actions we already know that Facebook has levers they can pull to directly affect the amount of news people see about a specific topic, let alone the source of information on that topic. These big social media companies guard these proprietary algorithms that are directly determining what news people see on a massive scale. Sure they claim to be a neutral arbiter of content that just gives people what they want but why would anyone believe them?

Lemmy is not the same thing, though it’s not without its own problems.

IsThisAnAI@lemmy.world on 21 Jun 11:32 collapse

Lemmy has hot and top. Both of those count as addictive algorithms.

rimu@piefed.social on 21 Jun 11:35 next collapse

How do you know that?

Toribor@corndog.social on 21 Jun 12:21 next collapse

Here is a bit of information on how Lemmy’s “Hot” sorting works.

I’m not going to argue about how addictive any specific feed or sorting method is, but this method is content neutral, does not adjust based on user behavior (besides which communities you subscribe to) and is completely transparent as all post interactions are public. With this type of sorting users can be sure that certain content is not prioritized over others (outside of mod actions which are also public). Having a more neutral straightforward ranking system that isn’t based on user behavior reduces addictiveness and is less likely to form echo chambers. This makes it easier to see more diverse content, reduces the spread of misinformation and is much more difficult to manipulate.
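The general shape of a “hot” ranking like the one described can be sketched in a few lines. This is a simplified illustration with my own constants, not Lemmy’s exact current formula: the vote score contributes logarithmically, age decays the rank polynomially, and nothing about an individual user’s behavior enters the calculation.

```python
import math

def hot_rank(score: int, hours_old: float) -> float:
    """Simplified 'hot' ranking in the spirit of Lemmy's approach.

    Popularity rewards votes with diminishing returns (logarithm),
    while an age penalty makes older posts fall off steeply.
    Constants here are illustrative, not Lemmy's exact values.
    """
    popularity = math.log(max(1, score + 3))
    decay = (hours_old + 2) ** 1.8
    return 10000 * popularity / decay

# The key property: a fresh post with a few votes can outrank an older
# post with ten times the score, keeping the feed from going stale.
fresh = hot_rank(score=10, hours_old=1)
stale = hot_rank(score=100, hours_old=24)
```

Because the inputs are only the public vote count and the post’s age, anyone can recompute a post’s rank and verify why it appears where it does, which is the transparency point being made above.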

AstralPath@lemmy.ca on 21 Jun 13:03 collapse

Thank you for posting this crucial context for the algorithms. I didn’t even know this information was available.

jeffw@lemmy.world on 21 Jun 19:55 collapse

Except there’s no company (possibly pressured by governments) manipulating what shows up in those places, and the algorithms are all transparent.

bionicjoey@lemmy.ca on 21 Jun 12:11 collapse

The algorithm here is pretty simple. It’s an open source project and you can go directly see that the code isn’t designed to maximise the user’s engagement, but rather simply to elevate more recent or popular posts. Sites like YouTube and Facebook develop much more complex recommendation algorithms with the goal of keeping people on their platform as long as possible.

foremanguy92_@lemmy.ml on 21 Jun 09:55 next collapse

There is ONE problem, the parents.

You shouldn’t disallow everything in your state… Parents should be educated about it and say stop to their kids; these days parents are submerged by all the tech stuff and don’t understand any of it. Sadly. The problem is bigger than kids: even 20-25-year-old adults do nothing with their days because they doom-scroll… This shouldn’t be banned (though yeah, the companies behind it are evil and do some evil stuff); otherwise they’ll end up banning everything next.

Vincente@lemmy.world on 21 Jun 11:39 next collapse

Just for kids? It needs to be totally banned for everyone!

_sideffect@lemmy.world on 21 Jun 12:32 next collapse

Everyone needs to stop doom scrolling. It adds nothing to your life and just makes other people money instead.

matthewmercury@reddthat.com on 21 Jun 14:56 collapse

These kids today, always writing things down and reading them. Scrolls! In my day we remembered things! Remember that? Course not, we didn’t write it down and your memories are all mush because kids these days are always writing things down and reading them!

UncommonBagOfLoot@lemmy.world on 22 Jun 10:51 collapse

Reminded me of this bit from Barge Ballad

Oh you’ve got to remember

Way up atop the mast

Knowing all the river routes

That you never learn from the charts

Well I do remember

fireweed@lemmy.world on 21 Jun 14:35 next collapse

All I want to know is: will this push companies to rethink infinite scroll? Like, even to make it a toggleable option.

I really appreciate that Lemmy still has distinct pages. “I’ll stop at the end of this page” is the easiest way to quit a social media session, which is why most companies have eliminated it.

sugar_in_your_tea@sh.itjust.works on 22 Jun 14:35 collapse

I find it annoying and prefer my mobile app’s infinite scroll. I just rarely look at All, and my curated list of subscriptions runs dry after a few minutes anyway.

The same was true on Reddit. The right way to use link aggregators, imo, is to curate a feed to minimize noise, and quit when you run out.

uriel238@lemmy.blahaj.zone on 21 Jun 18:51 next collapse

It raises the question of what does or doesn’t count as an addictive feed. I bet this doesn’t specify any particular dark pattern or monetization model.

If we gave half a fuck about mental wellness regarding mobile use, we would have addressed all this when it was particular to mobile games.

No, this is about our kids learning early how fucked society is, and how their own generation is being fed a pro-ownership-class indoctrination regimen before being appointed a string of dead-end toxic jobs.

Social media is how we learn about the genocide in Gaza, police officer-involved homicide rates, and unionization efforts. And that is why we want kids off social media.

Don’t make me put up the koala cartoon again.

sugar_in_your_tea@sh.itjust.works on 22 Jun 14:34 collapse

I agree with the first two paragraphs, but the rest really feels like you projecting your political and social outlook on the situation.

Social media provides and strengthens biased worldviews and confirmation bias. If you stick to social media, you’ll think violent crime is rapidly increasing (in the US), but it’s actually down. It’s still a problem, but it’s being used to push political agendas that don’t actually solve the problem. For example, banning bump stocks, which are almost never used in mass shootings (most of those are handguns), and make guns way less accurate. Only enthusiasts get them, and pretty much only for range use. And you also have to pull the trigger each time, it just makes that easier (can get the same effect with a rubber band…). It’s also how we got the anti-vax movement and various other conspiracies.

Social media is one way to get less censored news, but it’s unreliable and tends to lead to echo chambers. We should instead be pushing to get government and political bias out of news reporting (or at least make bias explicit), not protect the less trustworthy, biased social media based news sources. There are countless examples of large social media sources providing incorrect information, and never correcting it, and the false information gets more views than the correct information. Social media drives people toward radicalization, and it’s largely how we got Trump.

Social media is a liability. Mobile games are too. Parents should be restricting their children’s access to both (we do), and instead teaching children to recognize bias and find good information (we’re working on that, but they’re still young).

sugar_in_your_tea@sh.itjust.works on 22 Jun 14:45 next collapse

This is just going to end one of two ways:

  • companies storing and selling even more personally identifiable information
  • kids lying

Probably both.

So I’m going with no. I’m a responsible parent and I’m preventing my kids from accessing social media and teaching them how to find reliable information. As they earn my trust with other services, I’ll slowly remove restrictions. If I think my kids are ready for SM, I’ll let them have access, using a VPN to avoid state restrictions as needed.

Spotlight7573@lemmy.world on 23 Jun 10:14 next collapse

For scenario one, they totally need to delete the data used for age verification after they collect it according to the law (unless another law says they have to keep it) and you can trust every company to follow the law.

For scenario two, that’s where the age verification requirements of the law come in.

sugar_in_your_tea@sh.itjust.works on 23 Jun 13:09 collapse

You’ve never heard of kids getting fake IDs?

This law doesn’t stipulate how services prove age (at least according to the article), and if kids want something, they’ll find a way to get it.

MenacingPerson@lemm.ee on 25 Jun 01:00 collapse

What’s SM?

sugar_in_your_tea@sh.itjust.works on 25 Jun 02:15 collapse

Social media.

cupcakezealot@lemmy.blahaj.zone on 22 Jun 15:05 collapse

i think addictive feeds on social media should be banned, but the problem is the bill is so open to interpretation there’s no way to enforce it

postmateDumbass@lemmy.world on 22 Jun 15:58 collapse

Lay’s Potato Chips are worried.