UK Plans AI Experiment on Children Seeking Asylum (www.hrw.org)
from Pro@programming.dev to technology@lemmy.world on 01 Aug 17:18
https://programming.dev/post/34926893

Experimenting with unproven technology to determine whether a child should be granted protections they desperately need and are legally entitled to is cruel and unconscionable.

#technology


prole@lemmy.blahaj.zone on 01 Aug 18:03 next collapse

Fucking why?? Why is everyone so intent on shoehorning this half-baked garbage tech into literally everything?

floofloof@lemmy.ca on 01 Aug 18:18 next collapse

Keir Starmer is trying to out-fascist the fascists, and fascists love to experiment on children.

Also, apparently capitalism doesn’t work unless everyone hypes whatever the techbros are selling this week.

db2@lemmy.world on 02 Aug 00:24 next collapse

Not everyone. Tech bros, stock bros and the incredibly stupid. Yes there’s some overlap.

MonkderVierte@lemmy.zip on 02 Aug 01:04 next collapse

Plausible deniability.

rottingleaf@lemmy.world on 02 Aug 19:49 collapse

Exactly, it’s a tool to whitewash decisions. A machine that only appears to do what it’s supposed to do. A way to shake off responsibility.

And the fact that it will never work right is its best trait for this purpose. They’ll be able to blame every transgression or wrong they’re caught in on an error in the system, and get away with the rest.

At least unless it’s legally equated to using Tarot cards to make decisions that affect lives. That should further disqualify the fiend as a completely inadequate human being, not absolve them of responsibility.

AcidiclyBasicGlitch@sh.itjust.works on 04 Aug 02:02 collapse

Because they’ve already sunk too much money into it, thinking that if they fed it enough data it would suddenly develop superintelligence, and nobody wants to admit it is likely decades away from being what they advertised (if it ever reaches that point at all).

Their solution is to just keep throwing more money and data at it until they eventually make it work, or they kill us all trying. Which do you think will happen first?

pyre@lemmy.world on 01 Aug 18:40 next collapse

don’t buy this bullshit. i guarantee there’s no experiment, and probably no “AI” in the common sense of the word being used today. this is 100% going to be a deny-o-matic, because they’d rather say “the almighty AI determined it” than “we hate children”. This is the same thing UnitedHealthcare did, which led to the famous—and very popular—deposition of its CEO, and also what Israel claims to be a targeting system while they’re committing war crimes on top of a genocide.

panda_abyss@lemmy.ca on 01 Aug 18:42 next collapse

There’s a very high chance of racial bias issues here.

Venator@lemmy.nz on 02 Aug 19:15 collapse

that seems like it would be the main motivation for using it …

betterdeadthanreddit@lemmy.world on 02 Aug 05:33 next collapse

I won’t blame those kids one bit when their superpowers kick in and they start telekinetically shaking our cities to dust.

[deleted] on 02 Aug 22:28 next collapse

.

AcidiclyBasicGlitch@sh.itjust.works on 04 Aug 02:14 collapse

Companies that tested their technology in a handful of supermarkets, pubs, and on websites set them to predict whether a person looks under 25, not 18, allowing a wide error margin for algorithms that struggle to distinguish a 17-year-old from a 19-year-old.

AI face scans were never designed for children seeking asylum, and risk producing disastrous, life-changing errors. Algorithms identify patterns in the distance between nostrils and the texture of skin; they cannot account for children who have aged prematurely from trauma and violence. They cannot grasp how malnutrition, dehydration, sleep deprivation, and exposure to salt water during a dangerous sea crossing might profoundly alter a child’s face.

Goddamn, this is horrible. Imagine leaving shitty AI to determine the fate of this girl:

‘Psychologically broken,’ 8-year-old Sama loses her hair