Pogogunner@sopuli.xyz
on 07 Aug 18:46
Only corporate executives benefit from AI.
Everyone else is harmed, both directly and indirectly, and it makes the customer experience far worse because workers are replaced by chatbots that are incapable of understanding.
From what I’ve seen, the only people that have a positive view of AI are those who see themselves as the master of others. The trans, nonbinary, & disabled people in this study are very unlikely to fit that mold.
Fredselfish@lemmy.world
on 07 Aug 19:02
Sorry, cis male here, and I view AI negatively. This article is bullshit trying to separate us. We should all be appalled by AI.
The survey concludes that transgender, non-binary and disabled people tend to view AI more negatively than others. Nothing about this statistical result implies that there aren’t plenty of people in other groups, like you, who view AI negatively. It makes a claim about statistical trends.
If you read a report on statistical trends and respond “That’s bullshit! It doesn’t describe me!” it suggests you’re missing the point of surveys and statistics. The survey is quite compatible with many non-disabled cis men disliking AI a lot.
You’re probably overthinking this. To me it shows that this technology, which is often advertised as a tool to level the playing field and “democratize” industries is viewed as doing the exact opposite. Those who should benefit the most from it feel threatened and it looks like we’re moving closer to techno feudalism by the day as LLMs are squeezed into everything without thought. Fascists tend to love AI, use AI and advance AI. That makes the technology a natural enemy of many minorities.
Plenty of non-chatbots applications that do help the regular user though. Even if most of it is corporate bullshit, some of it isn’t.
Like for example VLC using it for subtitles on any video you have, in any language you want and automatically synced correctly. That they hallucinate a couple of times doesn’t really matter in that context.
tanisnikana@lemmy.world
on 07 Aug 19:03
Trans lady here, appalled by AI! A lot of the middle management I work with are eager for it, and since I work in M365 administration, my boss keeps compelling me to flip the CoPilot switch to “on” for people.
I hate it.
NocturnalMorning@lemmy.world
on 07 Aug 19:11
Human being checking in here, I am appalled by the current usage of AI.
Why is a statistical survey bullshit because of your personal view on the matter? Where does the survey imply that transgender, nonbinary and disabled people are the only ones who dislike AI?
The graphic shows that every group has attitudes that are somewhere between completely negative and completely positive. The groups mentioned are just a bit more negative than the others.
NocturnalMorning@lemmy.world
on 07 Aug 19:42
Because it singles people out for no reason. There is absolutely no reason to do a study like this that focuses on marginalized groups. Does this study make these marginalized groups’ lives better somehow by putting this information out there? Not a chance.
Research for the sake of doing research is asinine, and it’s rampant in academia. We have a publish-or-perish attitude in academia that is so pervasive it’s sickening… ask me how I know that (my partner is a professor).
And we basically all but force people to write papers and try to come up with novelty to justify their existence as a professor.
AI is a scourge on this earth in how we use it today. We didn’t need a study to tell us that, much less to single out a few groups of people, who frankly don’t need to be singled out any more than they already have been by the Trump administration.
astutemural@midwest.social
on 07 Aug 20:06
I mean, would you not want to do this specifically to see its effects on marginalized groups? That seems like a pretty good reason to me.
NocturnalMorning@lemmy.world
on 07 Aug 20:15
Admittedly, I didn’t read the article. I think the research is actually beneficial after reading the article, and it’s exactly the kind of research I think should be done on AI.
Spoke prematurely based on the headline, go figure…
AnarchistArtificer@lemmy.world
on 07 Aug 22:28
Props to you for admitting you spoke prematurely
Kn1ghtDigital@lemmy.zip
on 07 Aug 20:00
The whole thing is done in bad faith to make a correlation that isn’t there. I just conducted a study that says people are always cats. My study doesn’t show any actual correlation, but I think I once heard that a cat man exists, so there is potential for study.
UnderpantsWeevil@lemmy.world
on 07 Aug 20:11
It does feel a bit like the magazine is gunning for the “Don’t like AI? What are you, queer?” angle.
NoneOfUrBusiness@fedia.io
on 07 Aug 20:26
The article contains nothing of the sort and I have no idea why you came to that conclusion.
UnderpantsWeevil@lemmy.world
on 07 Aug 21:09
I believe that a future built on AI should account for the people the technology puts at risk.
I’ve seen various iterations of this column a thousand times before. The underlying message is always “AI is going to get shoved down your throat one way or another, so let’s talk about how to make it more palatable.”
The author (and I’m assuming there’s a human writing this, but it’s hardly a given) operates from the assumption that
identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories
but fails to consider that the problem is employing a rigid, impersonal, digital tool to engage with a non-uniform human population. The question ultimately being asked is how to get a square peg through a round hole. And while the language is soft and squishy, the conclusions remain as authoritarian and doctrinaire as anything else out of the Silicon Valley playbook.
NoneOfUrBusiness@fedia.io
on 08 Aug 05:33
This is a reasonable point, but it's also not what you said previously.
That's interesting. I feel like a lone voice in my university, trying to explain to people that using LLMs to do research tasks isn't a good idea for several reasons, but I'd never imagine that being disabled would put me into a group more likely to think like that. If I had to guess, I'd suggest that there's possibly a strong network effect being abused in our social environment to make people get into the AI hype, and we, the ones who live less connected to the "standard" social norms, tend to become less vulnerable to it.
It may also be that disabled, transgender and nonbinary people are more aware of:
The use of AI to reduce people’s employment opportunities, which are already tough enough for people in these groups.
The tendency of AI to reproduce the prejudices present in its training materials. If everyone’s relying on AI then historical prejudices are going to be perpetuated just because LLMs are regurgitation machines.
vaultdweller013@sh.itjust.works
on 08 Aug 15:04
As an autistic bastard I just think it’s shit, though I will say that I do partake in the guilty pleasure of Two Scuffed and DougDoug. But I wouldn’t feel particularly bad if every bit of generative AI spontaneously corrupted and could never be replicated, generally feels like a money sink for dipshit corporations at this point.
MossyFeathers@pawb.social
on 07 Aug 20:09
God, the number of people here who don’t know what “more likely” means is insane. Just because you aren’t trans, enby or disabled doesn’t mean the study is bullshit because you hate AI. It means that if you walk up to a random person and ask them about AI, they’re more likely to hate it if they exist in one of those groups.
Secondly, studies like this have value because they can clue people into issues that a community is having. If everyone is neutral about a thing, except for disabled people (who hate it), then maybe that means that the thing is having a disproportionately negative impact on disabled people. Studies like this are not unlike saying “hey, there’s smoke over there, there might be a fire.”
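That “more likely” claim is just a statement about proportions, and a tiny simulation makes the distinction concrete. The group names and dislike rates below are made up for illustration; they are not the study’s figures:

```python
import random

random.seed(0)

# Hypothetical dislike rates -- illustrative assumptions, NOT the study's numbers.
rates = {"group_a": 0.70, "group_b": 0.55}

def count_dislikes(group, n=10_000):
    """Simulate n survey respondents and count how many dislike AI."""
    return sum(random.random() < rates[group] for _ in range(n))

a = count_dislikes("group_a")
b = count_dislikes("group_b")

# Group A is "more likely" to dislike AI (a higher proportion does), even
# though thousands of people in group B dislike it too. Both facts coexist.
print(a / 10_000, b / 10_000)
```

The point of the sketch: a higher rate in one group says nothing about any individual, and it doesn’t stop the other group from containing plenty of AI-haters.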
The thing is, EVERYONE hates AI except for a very small number of executives and the few tech people who are falling for the bullshit the same way so many fell for crypto.
It’s like saying a survey indicates that trans people are more likely to hate American ISPs. Everyone hates them, and trans people are underrepresented in the population of ISP shareholders and executives. It doesn’t say anything about the trans community. It doesn’t provide any actionable or useful information.
It’s stating something uninteresting but applying a coat of rainbow paint to try to get clicks and engagement.
NoneOfUrBusiness@fedia.io
on 07 Aug 20:21
You might be living in an echo chamber. Most Americans use AI at least sometimes and plenty use it regularly according to studies.
We could argue all day over who is experiencing reality or who is in an echo chamber.
Pew Research found that US adults who are not “AI Experts” are more likely to view AI as negative and harmful.
NoneOfUrBusiness@fedia.io
on 07 Aug 20:32
We could argue all day over who is experiencing reality or who is in an echo chamber.
We could, or you could read the article where it addresses exactly that point. Most demographics are slightly positive on AI, with some neutral and only nonbinary people as slightly negative. The representative US sample is at 4.5/7.
You might be living in an echo chamber. Most Americans use AI at least sometimes and plenty use it regularly according to studies.
You literally are right here accusing me of being in an echo chamber for thinking Americans view AI negatively, then when I back that up with a source you are now… Claiming that the article says that.
Except that the whole “most demographics are positive on AI” piece that you toss in counters your own countering of my disagreement. You’re talking in circles here.
It’s also worth noting this article is using a sample size of 700 and doesn’t go all that heavily into the methodology. The author describes themself as a “social computing scholar” and states that they purposefully oversampled these minority groups.
The conclusion is nothing but wasted time and clicks. You’re in this thread telling people to “read the article” and I’m in here to warn people that it’s not worth their time to do so.
And this is part of a trend I’ve noticed on Lemmy lately: people posting obviously bad articles, users commenting that the articles are bad, and usually about 3-4 other users in the comments arguing and trying to drive more engagement to the article. More clicks, more ad revenue.
On a tangent, to me as an outsider it seems that most Americans are more likely to view anything as negative. I have no scientific backing for my shitpost though.
JigglySackles@lemmy.world
on 08 Aug 14:59
It’s hard to be positive here.
PunnyName@lemmy.world
on 07 Aug 21:49
The average person is not informed enough to even be aware of the problems with AI. Look at how aggressively AI is being marketed, and realize that this marketing works.
Or he’s not pushing a narrative that those individuals are luddites afraid of tech and are dumber than others?
What’s next, defining races by the lumps on their heads?
Mengele would be proud.
AnarchistArtificer@lemmy.world
on 08 Aug 14:18
My dude, do you know what statistics is? The paper doesn’t say anything of that sort. Measuring the proportion of people who hold a particular belief is nothing like what you describe
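Mechanically, “measuring the proportion of people who hold a particular belief” means estimating a proportion with an uncertainty band. A minimal sketch of the standard normal-approximation confidence interval, with invented respondent counts for illustration:

```python
from math import sqrt

def proportion_ci(k, n, z=1.96):
    """95% normal-approximation confidence interval for a sample proportion."""
    p = k / n
    se = sqrt(p * (1 - p) / n)  # standard error of the estimated proportion
    return p - z * se, p + z * se

# Suppose 140 of 200 sampled respondents in some group report a negative view.
lo, hi = proportion_ci(140, 200)
print(f"{lo:.2f} to {hi:.2f}")  # prints "0.64 to 0.76"
```

The interval is what separates a statistical claim from an anecdote: it says how far the sample proportion could plausibly sit from the population proportion, given the sample size.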
My dude. That is not science or statistics. That is having people fill out forms and having the answers interpreted, which raises questions about biased question design and wording, which brings in other questions.
I used to work as a person who asked people these questions. It ain’t science, it’s targeted questions.
So no. It’s not science or statistics. It’s media metrics.
Lost_My_Mind@lemmy.world
on 07 Aug 20:53
Hi. Haven’t read the article. Straight middle aged white guy here. I too also view AI negatively.
If trans, nonbinary, or disabled people view AI negatively, it’s not because they’re trans, nonbinary or disabled. It’s because AI is terrible, and it threatens to make all of our lives terrible (and is already proving it) for the sole sake of giving billionaires a few extra pennies.
Though I will say, if trans, nonbinary and disabled people have any extra issues with AI making their life specifically worse, that’s not caused by AI itself. It’s caused by the wealthy CHOOSING to use AI to make their lives worse.
This doesn’t need to happen. None of this needs to happen. Google doesn’t need entire campuses dedicated to AI with special power requirements. This is all bullshit.
The survey discovered that people in those groups are more likely to view AI negatively than those in other groups.
If trans, nonbinary, or disabled people view AI negatively, it’s not because they’re trans, nonbinary or disabled. It’s because AI is terrible, and it threatens to make all of our lives terrible (and is already proving it) for the sole sake of giving billionaires a few extra pennies.
People in these groups may have different or additional reasons for viewing AI negatively that are not common to other groups. It’s a question for further research why they tend to view AI more negatively. It might very well be because they’re trans, nonbinary or disabled - perhaps for conscious reasons or perhaps because of other factors. The survey shows that there are more questions to be asked and that it would be worth paying attention to the experiences of these groups.
LordWiggle@lemmy.world
on 07 Aug 20:54
Same way the other way around. Remember when grok went full mechahitler?
cupcakezealot@piefed.blahaj.zone
on 07 Aug 21:51
it's because we actually listened to the plot of the matrix
kibiz0r@midwest.social
on 07 Aug 22:50
These findings are consistent with a growing body of research showing how AI systems often misclassify, perpetuate discrimination toward or otherwise harm trans and disabled people. In particular, identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories. In doing so, AI systems simplify identities and can replicate and reinforce bias and discrimination – and people notice.
Makes sense.
These systems exist to sand off the rough edges of real life artifacts and interactions, and these are people who’ve spent their whole lives being treated like an imperfection that just needs to be smoothed out.
Why would you not be wary?
svcg@lemmy.blahaj.zone
on 07 Aug 23:01
Roughly 50% of transgender and/or non-binary people are software developers and roughly 50% are furry artists, so it makes sense we would be more wary of AI.
It can be a tiny bit involved to install but if you know your way around Linux already it’s perfectly doable. The arch wiki is a great reference for MANY things and it has a dedicated page with installation instructions.
I like that it’s lightweight because it comes with the bare minimum for a working Linux install and everything on top of that must be explicitly installed by you. I also love pacman (the package manager). It’s never borked anything for me and I’ve yet to be dropped into a dependency hell in 6+ years of using it.
Well, some are both software developers and furry artists, which I guess frees you up to be some other, hidden, third thing. Which I’m going to guess is either bass guitar player or train driver?
roofuskit@lemmy.world
on 07 Aug 23:10
Oppressed people don’t like the walled garden information tools made and profited from by the people using them as a scapegoat distraction for their fleecing of society?
idiomaddict@lemmy.world
on 07 Aug 23:53
intersectionality
Formfiller@lemmy.world
on 08 Aug 00:22
I think a lot of women feel this way too
augustus@sh.itjust.works
on 08 Aug 00:23
Yes, generative AI is a normative neurotypical triangulation machine. Why would this be thought of favorably?
I think it’s because the average person doesn’t understand about five words in your first sentence. They can understand marketing bull that they’re fed, though.
Why the FUCK do we need to start splintering this with identity politics? Seriously name one good reason why this isn’t a distraction from the class war. Just one.
Bubbaonthebeach@lemmy.ca
on 08 Aug 02:18
I’m none of those, however I believe they are right to view it negatively. The rest of us should be just as wary.
Bane_Killgrind@lemmy.dbzer0.com
on 08 Aug 02:34
outliers badly served by advanced averaging machine
Who knew
SoleInvictus@lemmy.blahaj.zone
on 08 Aug 02:56
Disabled, vehemently anti-AI enby here. The only thing I’m good at professionally is being a great big brain, so taking knowledge work away from me makes me angry.
Tollana1234567@lemmy.today
on 08 Aug 02:56
AI is the new crypto pushed by the CEOs and C-suites. Sorry, but there’s no market for it among a regular customer base, and they’ve admitted it’s costing them a lot more money to use AI than they’re saving or profiting from it. It’s actually no wonder the people who fall for AI/crypto are mostly conservatives.
vacuumflower@lemmy.sdf.org
on 08 Aug 06:15
who fall for AI /crypto are mostly conservatives
Let’s separate these two things.
The latter does work well enough to be used by a certain kind of people. That it’s not the new revolution is fine. I’ve recently looked through NOSTR NIPs and they make a huge thing out of functionality for sending “zaps”, and you know why? Because payments mean the ability to send universal value in exchange for some subjective value. It’s almost the difference in efficiency between barter and money.
I can’t be proud of it, because, despite sharing libertarian ideas, I was highly skeptical of such systems. One could say I was gaslit into considering it all to have become a scam.
So - NOSTR looks like something that will work. Its standards involve a lot of different functionality, so clients usually decide to implement only part of it - some like Reddit/Lemmy communities, some like Telegram group chats, and so on (it kinda seems to even out with time; Amethyst has recently got group chats, for example). And thus it often seems devoid of life to new people. But it’s already big enough that the search results don’t seem particularly right-wing skewed.
So - I’ve noticed that people very often send these “zaps”. It’s normal to tip stuff in NOSTR. Already.
It’s a long-term advantage, but that system in its architecture is far better than Fediverse, that’s what I mean.
And honestly it’s not unheard of for left-wing technical projects to use good tooling and competent people and appear impressive, but long-term lose to right-wing technical projects which use some tooling and some people and don’t appear too cool, but are more applicable socially.
I really feel like trying to write a NOSTR client, LOL.
It has a market for a regular customer base. But they try to shove it into everything just to see what sticks, and most of those things are useless at best or actively make the product worse.
NoodlePoint@lemmy.world
on 08 Aug 03:04
Because some are artisans and see that their work is being pillaged for AI “training”.
“Some groups like bullshit less than others” says survey.
“This is why bullshit is bad.” says author. “Here’s my post-hoc reasoning for why I got these results.”
Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted. If you really want to make creepy stuff, you can go ahead and do it. I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself.
vacuumflower@lemmy.sdf.org
on 08 Aug 06:00
It’s mathematically an insult to life itself. It changes evolution in human societies to reduce dissent and diversity of thought. And evolution is important in the sense that, to stay in one place, you have to run very fast.
So it’s sort of a tool for regress. Honestly - similar to the Web itself. It was intended as a hypertext system for scientists. For social interaction there were e-mail and e-news.
I’m thinking - I always thought Sun was a very cool company, but at the same time they are also the ones who popularized this messy understanding of the future in which, with some commercial adjustments by today’s big tech, we still live. And that understanding was highly centralist, sort of a digital empire.
Curious_Canid@lemmy.ca
on 08 Aug 03:46
Actual computer scientists should also be included with those groups.
itslilith@lemmy.blahaj.zone
on 08 Aug 10:59
LGBTC
andros_rex@lemmy.world
on 08 Aug 09:45
Knowledge based fields were historically a “safe space” for queer and disabled people. If you are just super fucking smart and could be a wizard in a programming language, or were a genius physicist, you could get to the point where you were too valuable to fire for being trans or disabled. I may be trans and an unperson in the place I live, but I can do calculus, and there’s no way they can take that away from me.
There’s an attack on knowledge itself going on right now. A desire by the rich to control information. They want to force us into an unreality where skill and knowledge are meaningless. This hurts people who are socially marginalized, because it takes away one of our few paths for economic survival.
It goes with the attacks on DEI. What they want is a tool that can replace the need for talent, so that they can select who gets to have jobs. They want all jobs to be Graeber’s “bullshit jobs” so that skill is meaningless and they can allot them out to the people they think “deserve” them.
SoftestSapphic@lemmy.world
on 08 Aug 22:23
One that divides people in lower tax brackets
JigglySackles@lemmy.world
on 08 Aug 14:49
Man between this and the “AI vegan” bullshit article, they really want to get ahead on crushing any thought that AI is bad. At least by easily manipulated groups.
DragonTypeWyvern@midwest.social
on 08 Aug 14:56
Like the kind of person that thinks vegans and trans = easily manipulated, because they’ll never consider the point might be manipulating their own biases against those groups.
prettybunnys@sh.itjust.works
on 08 Aug 15:47
I think it’s also in an effort to other those who are against AI as it’s being done rn
JigglySackles@lemmy.world
on 08 Aug 23:59
Yep. Making them an outcast class ahead of any firm resistance.
JigglySackles@lemmy.world
on 08 Aug 23:58
Yes exactly. They are manipulating the “unwashed masses” that think anything different from them is bad. Bro Jogan die hards etc.
Lucidlethargy@sh.itjust.works
on 08 Aug 16:34
Oppressed people, or those who know what it’s like to be oppressed, are more likely to recognize oppression.
AI is going to fuck us all at this rate. It’s already begun. People are losing their jobs.
mycelium_underground@lemmy.world
on 08 Aug 18:05
Losing jobs is just the tip of the iceberg, individualized, friendly mass manipulation is where the shit hits the fan in a whole new way
Electricd@lemmybefree.net
on 08 Aug 17:34
It’s a piece designed to position those people as less intelligent. Same type of bullshit as in the ’20s through the ’70s.
Not “more negatively”. A higher proportion of that group views it as negative, to the same degree.
“more likely”
This study is bullshit.
I’m a cis white autistic girl, I should say a 7. Thanks for the sharing, very interesting.
“more likely”
I’m none of those things and so-called AI is utter shit.
No, it’s interesting.
The average person is not informed enough to even be aware of the problems with AI. Look at how aggressively AI is being marketed, and realize that this marketing works.
I wonder if the correlation is that these groups tend to be more informed.
No, it’s that AI has a white male bias.
Which the minority groups are more informed about…
Which is irrelevant
That’s fair, because AI is biased against them.
Possibly… However, AI will be used to discriminate against them as America ramps up its concentration camps for the undesirables.
TIL, I am transgender, non binary and disabled. That is why I hate AI slop.
You are committing a logical fallacy called “affirming the consequent”.
Or he’s not pushing a narrative that those individuals are Luddites, afraid of tech and dumber than others?
What’s next, defining races by the lumps on their heads?
Mengele would be proud.
My dude, do you know what statistics is? The paper doesn’t say anything of that sort. Measuring the proportion of people who hold a particular belief is nothing like what you describe.
My dude. That is not science or statistics. That is having people fill out forms and having the answers interpreted, which raises questions about biased wording and question design, which raises still more questions.
I used to work as a person who asked people these questions. It ain’t science, it’s targeted questions.
So no, it’s not science or statistics. It’s media metrics.
Hi. Haven’t read the article. Straight middle aged white guy here. I too also view AI negatively.
If trans, nonbinary, or disabled people view AI negatively, it’s not because they’re trans, nonbinary or disabled. It’s because AI is terrible, and threatens to make all of our lives terrible (and is already proving that it will) for the sole sake of giving billionaires a few extra pennies.
Though I will say, if trans, nonbinary and disabled people have any extra issues with AI making their lives specifically worse, that’s not caused by AI itself. It’s caused by the wealthy CHOOSING to use AI to make their lives worse.
This doesn’t need to happen. None of this needs to happen. Google doesn’t need entire campuses dedicated to AI with special power requirements. This is all bullshit.
The survey discovered that people in those groups are more likely to view AI negatively than those in other groups.
People in these groups may have different or additional reasons for viewing AI negatively that are not common to other groups. It’s a question for further research why they tend to view AI more negatively. It might very well be because they’re trans, nonbinary or disabled - perhaps for conscious reasons or perhaps because of other factors. The survey shows that there are more questions to be asked and that it would be worth paying attention to the experiences of these groups.
Same way the other way around. Remember when grok went full mechahitler?
it's because we actually listened to the plot of the matrix
ITT: “this study doesn’t say anything interesting about ME, it must be bullshit!!”
Smart bunch it would seem.
Fuck AI
Makes sense.
These systems exist to sand off the rough edges of real life artifacts and interactions, and these are people who’ve spent their whole lives being treated like an imperfection that just needs to be smoothed out.
Why would you not be wary?
Roughly 50% of transgender and/or non-binary people are software developers and roughly 50% are furry artists, so it makes sense we would be more wary of AI.
I use arch, btw.
.
Sysadmin is an adequate pastiche, you don’t need to specify the exact queer animal person they are.
Eeh we’re not all furries it’s more like 1 in 4 of us.
If the remaining three aren’t out, I’m not going to take that choice from them :p
Trans enby software dev who dated a furry artist; my disdain for AI knows no limits.
I use Nobara, btw. (Is Arch good I’ve never looked into it)
It can be a tiny bit involved to install but if you know your way around Linux already it’s perfectly doable. The arch wiki is a great reference for MANY things and it has a dedicated page with installation instructions.
I like that it’s lightweight because it comes with the bare minimum for a working Linux install and everything on top of that must be explicitly installed by you. I also love pacman (the package manager). It’s never borked anything for me and I’ve yet to be dropped into a dependency hell in 6+ years of using it.
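For anyone curious what day-to-day pacman use actually looks like, here’s a minimal sketch (the package name is just an example):

```shell
# Full system upgrade: sync the package databases, then update everything.
sudo pacman -Syu

# Install a package.
sudo pacman -S vlc

# Show detailed info about an installed package, including its dependencies.
pacman -Qi vlc

# Remove a package along with any dependencies nothing else needs.
sudo pacman -Rs vlc
```

The Arch wiki’s pacman page covers the rest; note that Arch explicitly doesn’t support partial upgrades, which is why `-Syu` is the standard update command.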
I got in a dependency loop one time. It was my own damn fault 😂
Wait, if I’m not a software developer, then I must be a furry artist 🫣 The things I learn about myself on the internet… xd
Well, some are both software developers and furry artists, which I guess frees you up to be some other, hidden, third thing. Which I’m going to guess is either bass guitar player or train driver?
driving trains sounds interesting, I pick that :D
Oppressed people don’t like the walled garden information tools made and profited from by the people using them as a scapegoat distraction for their fleecing of society?
*intersectionality*
I think a lot of women feel this way too
Yes, generative AI is a normative neurotypical triangulation machine. Why would this be thought of favorably?
I think it’s because the average person doesn’t understand about five words in your first sentence. They can understand marketing bull that they’re fed, though.
Why the FUCK do we need to start splintering this with identity politics? Seriously name one good reason why this isn’t a distraction from the class war. Just one.
I’m none of those however I believe they are right to view it negatively. The rest of us should be just a wary.
Who knew
Disabled, vehemently anti-AI enby here. The only thing I’m good at professionally is being a great big brain, so taking knowledge work away from me makes me angry.
AI is the new crypto, pushed by CEOs and C-suites. Sorry, but there’s no market for it among a regular customer base, and they’ve admitted that using AI is costing them a lot more money than it’s saving, let alone earning. It’s no wonder the people who fall for AI/crypto are mostly conservatives.
Let’s separate these two things.
The latter does work well enough to be used by a certain kind of people. That it’s not the new revolution is fine. I’ve recently looked through the NOSTR NIPs, and they make a huge thing out of the functionality for sending “zaps”, and you know why? Because payments make it possible to exchange universal value for some subjective value. It’s almost the difference in efficiency between barter and money.
I can’t be proud of it, because, despite sharing libertarian ideas, I was highly skeptical of such systems. One could say I was gaslit into considering it all to have become a scam.
So - NOSTR looks like something that will work. Its standards involve a lot of different functionality, so clients usually decide to implement only part of it - some look like Reddit\Lemmy communities, some like Telegram group chats, and so on (it kinda seems to even out with time; Amethyst has recently got group chats, for example). And thus it often seems devoid of life to new people. But it’s already big enough that the search results don’t seem particularly right-wing skewed.
So - I’ve noticed that people very often send these “zaps”. It’s normal to tip stuff in NOSTR. Already.
It’s a long-term advantage, but that system’s architecture is far better than the Fediverse’s - that’s what I mean.
And honestly it’s not unheard of for left-wing technical projects to use good tooling and competent people and appear impressive, but long-term lose to right-wing technical projects which use some tooling and some people and don’t appear too cool, but are more applicable socially.
I really feel like trying to write a NOSTR client, LOL.
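If anyone else feels that itch: the core of the protocol (NIP-01) is small enough that the event-id computation fits in a few lines. A sketch, assuming NIP-01; the pubkey and timestamp below are dummy values and the event is unsigned:

```python
import hashlib
import json


def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    """Compute a NIP-01 event id: the sha256 of the canonical JSON
    serialization [0, pubkey, created_at, kind, tags, content]."""
    payload = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # no whitespace, per the spec
        ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


# A kind-1 event is an ordinary text note; the pubkey here is a dummy.
event_id = nostr_event_id(
    pubkey="0" * 64,
    created_at=1700000000,
    kind=1,
    tags=[],
    content="hello nostr",
)
print(event_id)
```

The real work in a client is elsewhere: schnorr-signing the id, speaking the relay websocket protocol, and deciding which of the many NIPs to implement.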
It has a market for a regular customer base. But they try to shove it down everything just to see what sticks, and most of those things are useless at best or actively making the product worse.
Because some are artisans and see that their work is being pillaged for AI “training”.
“Some groups like bullshit less than others” says survey. “This is why bullshit is bad.” says author. “Here’s my post-hoc reasoning for why I got these results.”
In the words of Miyazaki:
It’s mathematically an insult to life itself. It changes evolution in human societies so as to reduce dissent and diversity of thought. And evolution is important in the sense that, to stay in one place, you have to run very fast.
So it’s sort of a tool for regress. Honestly - similar to the Web itself. It was intended as a hypertext system for scientists. For social interaction there were e-mail and e-news.
I’m thinking - I always thought Sun was a very cool company, but at the same time they are also the ones who popularized this messy understanding of the future in which, with some commercial adjustments by today’s big tech, we still live. And that understanding was highly centralist - sort of a digital empire.
Actual computer scientists should also be included with those groups.
LGBTC
Knowledge based fields were historically a “safe space” for queer and disabled people. If you are just super fucking smart and could be a wizard in a programming language, or were a genius physicist, you could get to the point where you were too valuable to fire for being trans or disabled. I may be trans and an unperson in the place I live, but I can do calculus, and there’s no way they can take that away from me.
There’s an attack on knowledge itself going on right now. A desire by the rich to control information. They want to force us into an unreality where skill and knowledge are meaningless. This hurts people who are socially marginalized, because it takes away one of our few paths for economic survival.
It goes with the attacks on DEI. What they want is a tool that can replace the need for talent, so that they can select who gets to have jobs. They want all jobs to be Graeber’s “bullshit jobs” so that skill is meaningless and they can allot them out to the people they think “deserve” them.
What kinda bullshit narrative is this?
One that divides people in lower tax brackets
Man between this and the “AI vegan” bullshit article, they really want to get ahead on crushing any thought that AI is bad. At least by easily manipulated groups.
Like the kind of person that thinks vegans and trans = easily manipulated, because they’ll never consider the point might be manipulating their own biases against those groups.
I think it’s also in an effort to other those who are against AI as it’s being done rn
Yep. Making them an outcast class ahead of any firm resistance.
Yes exactly. They are manipulating the “unwashed masses” that think anything different from them is bad. Bro Jogan die hards etc.
Oppressed people, or those who know what it’s like to be oppressed, are more likely to recognize oppression.
AI is going to fuck us all at this rate. It’s already begun. People are losing their jobs.
Losing jobs is just the tip of the iceberg, individualized, friendly mass manipulation is where the shit hits the fan in a whole new way
Ragebait
Maybe only those people are worthy of surviving this technology-addiction disease.
That’s because they’re all on lemmy
Sounds like confirmation bias.
I’d like to know the industry sector they are working in.
I’d say a high amount of them work in Tech and IT.
Well, that explains a lot about why Lemmy dislikes AI with a burning passion.