stembolts@programming.dev
on 23 Sep 2024 19:17
“Wow Johnson, no matter how much biased data we feed this thing it just keeps repeating biases from human society.”
Sample input from a systemically racist society (the entire world), get systemically racist output.
No shit. Fix society or “tune” your model, whatever that entails…
Obviously only one of these is feasible from a developer perspective.
Gaywallet@beehaw.org
on 23 Sep 2024 19:21
While it may be obvious to you, most people don’t have the data literacy to understand this, let alone use this information to decide where it can/should be implemented and how to counteract the baked-in bias. Unfortunately, as the article mentions, people believe the problem is going away when it is not.
stembolts@programming.dev
on 23 Sep 2024 19:28
.
Gaywallet@beehaw.org
on 23 Sep 2024 19:40
I suppose, to wrap up my whole message in one closing statement: people who deny systemic inequality are braindead, and for whatever reason they were on my mind while reading this article.
In my mind, this is the whole purpose of regulation. A strong governing body can put in restrictions to ensure people follow the relevant standards. Environmental protection agencies, for example, help ensure that people who understand waste are involved in corporate production processes. Regulation around AI implementation and transparency could ensure that people think about these issues, or at the very least that deployments go through a proper review process. Think institutional review boards for academic studies, but applied to the implementation or design of AI.
I’ll be curious what they find out about removing these biases. How do we even define a racism-free model? We have nothing to compare it to.
AI ethics is a field which very much exists; there are plenty of ways to measure and define how racist or biased a model is. The comparison groups are typically other demographics, such as in this article, where they compare African American English (AAE) to standard English.
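For concreteness, a minimal sketch of the kind of measurement that comment is pointing at, loosely in the spirit of the article’s AAE-versus-standard-English comparison: score the same content written in both dialects for a trait association and treat the gap as the bias signal. The sentence pair and the `trait_score` stub below are illustrative stand-ins, not the article’s actual method or data.

```python
# A hypothetical matched-guise style probe: the same content written in
# African American English (AAE) and in standard English, compared on how
# strongly a model associates each version with a trait adjective.
# `trait_score` is a stand-in for a real model query (for example, the
# probability a language model assigns to "The person who says that is
# <trait>" after seeing the text); plug in a real model before reading
# anything into the numbers.

PAIRS = [
    # (AAE version, standard English version) of the same content, illustrative
    ("he be workin hard every day",
     "he works hard every day"),
]

def trait_score(text: str, trait: str) -> float:
    """Placeholder scorer; wire an actual language model in here."""
    return 0.0  # dummy value so the sketch runs end to end

def dialect_gap(trait: str) -> float:
    """Mean association difference (AAE minus standard English) for one trait.
    A consistently negative gap on positive traits, or a positive gap on
    negative ones, is the bias signal."""
    gaps = [trait_score(aae, trait) - trait_score(std, trait)
            for aae, std in PAIRS]
    return sum(gaps) / len(gaps)

print(dialect_gap("intelligent"))  # 0.0 until a real model is plugged in
```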
leisesprecher@feddit.org
on 23 Sep 2024 19:48
The real problem is implicit bias: the kind of discrimination that a reasonable user of a system can’t even see. How are you supposed to know that applicants from “bad” neighborhoods are rejected at a higher rate if the system is presented to you as objective? And since AI models don’t really explain how they arrive at a solution, you can’t even audit them.
I have a feeling that’s the point with a lot of their use cases, like RealPage.
It’s not a criminal act when an AI did it! (Except it is and should be.)
“It’s not redlining when an algorithm does it!”
This is the thing I keep pointing out about AI:
We’re like teenaged trailer trash parents who just gave birth to a genius at the trailer park where we’re all dysfunctional alcoholics and meth addicts …
… now we’re acting surprised that our genius baby talks like an idiot after listening to us for ten years.
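The “bad neighborhood” effect above is also a good example of what an external audit can still catch: the model’s internals may be opaque, but if its decisions can be joined back to the people they were made about, the disparity shows up in the output statistics. A minimal sketch, with made-up data and the common four-fifths rule of thumb:

```python
# An outside-in audit sketch: group a black-box model's decisions by the
# applicants they concern and compare selection rates. All groups and
# numbers below are made up for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, accepted) pairs."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += ok  # True counts as 1, False as 0
    return {g: accepted[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Each group's selection rate relative to the reference group.
    The common "four-fifths rule" flags ratios below 0.8."""
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical decisions: (applicant's neighborhood, model said yes?)
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
print(disparate_impact(audit, reference_group="A"))  # {'A': 1.0, 'B': 0.5}
```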
orca@orcas.enjoying.yachts
on 23 Sep 2024 21:13
Shit in, shit out. That’s AI. You can’t guarantee a single thing it says is true, and you have to play whack-a-mole forever to get it to behave. Imagine knowing this and still investing time and money in it. We could be investing that in education and making the human experience better, but instead we’re stuck watching capitalists harness it to replace people, and shove half-baked ideas out the door as finished products.
Look, I love tech. I’ve worked in tech for 20 years. I’ve built apps that use AI. It’s the one tech that I despise watching capitalists have control of. It’s just chatbots all the way down that don’t know what they’re regurgitating, and eventually they’re going to be vacuuming up nothing but other AI content. That is going to be the future. Just bots talking to other bots. Everything completely devoid of humanity.
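The “bots vacuuming up other bots” worry has a name in the research literature, model collapse, and the feedback loop fits in a few lines. Here a one-parameter Gaussian stands in for a trained model; real systems are vastly more complex, so treat this only as a sketch of the direction of the effect:

```python
# Toy model-collapse loop: each "generation" is trained only on samples
# produced by the previous generation. Finite samples under-represent rare
# events, so in expectation the fitted spread shrinks and diversity is lost.
import random
import statistics

mu, sigma = 0.0, 1.0  # generation 0: the "human" data distribution
for gen in range(1, 6):
    # "train" the next model on 100 outputs of the current one
    samples = [random.gauss(mu, sigma) for _ in range(100)]
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)
    print(f"gen {gen}: mean={mu:+.2f} stdev={sigma:.2f}")
```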
ContrarianTrail@lemm.ee
on 24 Sep 2024 04:18
Replace “AI” with “humans” and this rant is still perfectly coherent.
“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
—Charles Babbage, on his Analytical Engine, 1864
AnarchistArtificer@slrpnk.net
on 24 Sep 2024 16:26
I need to add that to my quotes book; it’s great.
bobburger@fedia.io
on 23 Sep 2024 23:02
I like that they say “outdated” stereotypes, as if they used to be true but now they aren’t.
Come on, people, keep your stereotypes current.
miracleorange@beehaw.org
on 24 Sep 2024 03:05
You mean the problems that experts said 10+ years ago would happen are happening?