In this study, we conducted a survey (n = 742) including a representative U.S. sample and an oversample of gender minorities, racial minorities, and disabled individuals to examine how demographic factors shape AI attitudes.
Thanks for the actual response. Personally I think your sample size is way too low, and the selection is skewed towards people that already feel marginalized, which will, in turn, skew your results.
I looked into that, and the only question I really have is how geographically distributed the samples were. Other than that, it was an oversampled study, so <50% of the people were the control, of sorts. I don't fully understand how the sampling worked, but there is a substantial chart at the bottom of the study that shows the full distribution of responses. Even with under 1000 people, it seems legit.
Imgonnatrythis@sh.itjust.works
on 03 Jul 04:28
They checked to see whether or not they had Lemmy accounts.
Lyra_Lycan@lemmy.blahaj.zone
on 02 Jul 23:11
The trick is for everyone on the seesaw to move as far away as possible from AI, then it'll balance or tilt in favour of the people.
magnetosphere@fedia.io
on 02 Jul 23:44
I don't blame them for being skeptical. Anything that corporations/rich people are enthusiastic about usually ends up screwing them.
sugar_in_your_tea@sh.itjust.works
on 03 Jul 14:55
A certain amount of skepticism is healthy, but it's also quite common for people to go overboard and completely avoid a useful thing just because some rich idiot is pushing it. I've seen a lot of misinformation here on Lemmy about LLMs because people hate the environment it's in (layoffs in the name of replacing people with "AI"), but they completely ignore the merit the tech has (it's great at summarizing and at providing decent results from vague queries). If used properly, LLMs can be quite useful, but people hyper-focus on the negatives, probably because they hate the marketing material and the exceptional cases the news is great at shining a spotlight on.
I'm also skeptical about LLMs' usefulness, but I do find them useful in some narrow use cases I have at work. They're not going to actually replace any of my coworkers anytime soon, but they do help me be a bit more productive, since they're yet another option for getting unstuck when I hit a wall.
Just because there's something bad about something doesn't make the tech useless. If something gets a ton of funding, there's probably some merit to it, so turn your skepticism into a healthy quest for truth and maybe you'll figure out how to benefit from it.
For example, the hype around cryptocurrency makes it easy to knee-jerk reject the technology outright, because it looks like it's merely a tool to scam people out of their money. That is partially true, but it's also a tool that makes anonymous transactions feasible. Yes, there are scammers out there pushing worthless coins in pump-and-dump schemes, but there are also privacy-focused coins (Monero, Z-Cash, etc.) that are being used today to help fund activists operating under repressive regimes. It's also used by people doing illegal things, but hey, so is cash, and privacy coins are basically easier-to-use cash. We probably wouldn't have had those without Bitcoin, though they use very different technology under the hood to achieve their aims. Maybe they're not for you, but they do help people.
Instead of focusing on the bad of a new technology, more people should focus on the good, and then weigh for themselves whether the good is worth the bad. I think in many cases it is, but only if people are sufficiently informed about how to use it to their advantage.
cmnybo@discuss.tchncs.de
on 02 Jul 23:44
I think just about everyone who is not an executive at a tech company is highly skeptical of AI.
prole@lemmy.blahaj.zone
on 03 Jul 03:04
You'd hope, and yet I've had people on Lemmy give me shit for being overtly anti-LLM.
CatsGoMOW@lemmy.world
on 03 Jul 03:40
I hate that it's being shoved into anything and everything right now, but saying you're "overtly anti-LLM" seems a bit overdramatic to me. LLMs are a tool like anything else. Used properly and in the right situation, they can be very helpful.
itsprobablyfine@sh.itjust.works
on 03 Jul 03:51
They're mostly not being used for that, and they come at a huge cost.
GnuLinuxDude@lemmy.ml
on 03 Jul 05:13
I'm overtly anti-LLM. I don't think it's dramatic at all to be so.
Enough has come out about how much power and water the datacenters that train and run it consume, people being driven insane by it, investors hoping to displace jobs with it, how over-reliance on it diminishes your mental faculties, people from minors to adults using it to create deepfake porn of minors (literally, it's on Lemmy right now: lemmy.ml/post/32581009), its use in overt misinformation (particularly from our modern warzones and disaster areas), overt theft of writing and artistry to train these things, and last but not least: limitless spam.
I'm affected by most of those things indirectly, but the spam affects me daily. Can't search for something on the net anymore without being served f-tier LLM-produced garbage.
So what are the good parts? Doesn't seem like they outweigh these bad parts, whatever they are.
Plebcouncilman@sh.itjust.works
on 03 Jul 14:15
Most of these arguments were made about computers back when they were gaining popularity, FYI.
The people outsourcing their thinking to LLMs weren't gonna do much thinking in the first place. And honestly, once you use them for a while, you quickly realize what their good uses are and what their limitations are, and thinking is not their strong suit. But they're great at sorting large amounts of data and making it digestible. Or at writing corpo copy that was devoid of meaning anyway.
Remember that a hammer can kill a person just as well as it can build a house.
Now, I agree that it is annoying that it is being shoved into everything without any good reason, but the market will sort that out. What you are seeing is everyone rushing into a nascent market before it ossifies and shakes out everyone except one or two winners. In 10 years I'm sure LLMs will be more like: you have one that you plug into every service you use, and it will be provided by one of a handful of companies, who are the only ones capable of profiting from this because of the economies of scale it requires to work. Ergo, not very different from every other tech rush that has happened in history.
LLMs are tools, simple as. Being a Luddite, screaming and kicking and crying over them, is not gonna make them go away any more than boomers crying over computers has managed to make computers go away.
Your last paragraph implies that I'm naive for believing that complaining about it will make it go away, but I've done no such thing.
"the market will sort that out"
This is the naive statement.
sugar_in_your_tea@sh.itjust.works
on 03 Jul 14:43
"Can't search for something on the net anymore without being served f-tier LLM-produced garbage."
I don't see a material difference vs. the f-tier human-produced garbage we had before. Garbage content will always exist, which is why it's important to learn how to filter it.
This is true of LLMs as well: they can and do produce garbage, but they can be and are useful alternatives to existing tech. I don't use them exclusively, but as an alternative when traditional search or whatever isn't working, they're quite useful. They provide rough summaries of things that I can usually verify easily, and they produce a bunch of keywords that can help refine my future searches. I use them a handful of times each week and spend more time using traditional search and reading full articles, but I do find LLMs to be a useful tool in my toolbox.
I'm also frustrated by the energy use, but it's one of those things that will get better over time as the LLM market matures from a gold rush into established businesses that need to actually make money. The same happens with pretty much every new thing in tech: there's a ton of waste until the product finds its legs and then becomes a lot more efficient.
Remember how a few years ago 3D displays and VR were being shoved in everyone's faces? I can see the current "AI" trend going the same way.
sugar_in_your_tea@sh.itjust.works
on 03 Jul 14:33
VR is still cool and will probably always be cool, but I doubt it'll ever be mainstream. 3D was just awkward, and they really just wanted VR but the tech wasn't there yet.
I own neither, yet I've been considering VR for a few years now, just waiting for more headsets to have proper Linux support before I get one.
Likewise, I'm not paying for LLMs, but I do use the ones my workplace provides. They're useful sometimes, and it's nice to have them as an option when I hit a wall or something. I think they're interesting and useful, but not nearly as powerful as the big corporations want you to think.
ijedi1234@sh.itjust.works
on 03 Jul 04:21
My problem with LLMs is that they're expert pattern matchers and little else.
Ask them for the integral of ln(x) from 1 to 5 and they're sure to screw it up.
They'll give you something that sounds like the right answer, but their explanations are nonsense.
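For reference, that particular integral has a clean closed form via integration by parts, so it's an easy sanity check against whatever a model outputs:

\[
\int_{1}^{5} \ln x \, dx = \Big[\, x\ln x - x \,\Big]_{1}^{5} = (5\ln 5 - 5) - (1\cdot\ln 1 - 1) = 5\ln 5 - 4 \approx 4.047
\]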
Bonesince1997@lemmy.world
on 03 Jul 05:00
"Cold readings, like a psychic's" is how I recently heard them described.
Exactly... I advise anyone with some kind of expertise to ask ChatGPT some questions about their specific field and see how accurate it is... then try to ever believe it about anything else ever again.
sugar_in_your_tea@sh.itjust.works
on 03 Jul 14:30
There's a difference between healthy skepticism and invalid, knee-jerk opposition.
LLMs are a useful tool sometimes, and I use them for refining general ideas into specific things to research, and they're pretty good at that. Sure, what they output isn't trustworthy on its own, but I can pretty easily verify most of what they spit out, and they do a great job of spitting out a lot of stuff that's related to what I asked.
For example, I'm a SW dev, so I'll often ask it stuff like, "compare and contrast popular projects that do X", and it'll find a few for me and give easily verifiable details about each one. Sometimes it's wrong on one or two details, but it gives me enough to decide which ones I want to look more deeply into. Or I'll do some greenfield research into a topic I'm not familiar with, and it does a fantastic job of pulling out keywords and other domain-specific stuff that helps refine what I search for.
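A minimal sketch of what that kind of query looks like when scripted, purely as an illustration: the openai Python client, the example project names, and the model name below are assumptions for the sketch, not necessarily what anyone in this thread actually uses.

# Illustrative only: any chat-completion API works the same way.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Compare and contrast popular open-source projects that do background "
    "job queues in Python (for example Celery, RQ, Dramatiq): maturity, "
    "broker support, and typical use cases. Flag anything you're unsure about."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; swap in whatever your provider offers
    messages=[{"role": "user", "content": prompt}],
)

# Treat the answer as a list of leads to verify, not as ground truth.
print(response.choices[0].message.content)

The point is the workflow, not the tool: the reply hands you names and keywords to verify with a normal search.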
LLMs do a lot less than their proponents claim, but they also do a lot more than detractors claim. Theyāre a useful tool if you understand the limitations and have a rough idea of how they work. Theyāre a terrible tool if you buy into the BS coming from the large corps pushing them. I will absolutely push back against people on both extremes.
HubertManne@piefed.social
on 03 Jul 14:44
I mean, there is a place in between highly skeptical and anti. I think it's a faster and more convenient search as long as it gives sources, and it makes creating and editing media easier. I don't like the energy usage and do like work bringing that down. It's just that trying to get it to solve things on its own seems to be what's pushed, when we can clearly see that not working when it's used like that. I think the biggest issue is it's crammed in as a solution, it works in the most half-assed manner, and they want to say that's fine.
tiredofsametab@fedia.io
on 03 Jul 03:54
We have a lot of non-management who are all-in and drinking the Kool-Aid. I'm still highly put off for a number of reasons, but I'm an outlier.
HubertManne@piefed.social
on 03 Jul 14:40
I was just trying to figure out how to express that exact sentiment. Thank you.
ploot@lemmy.blahaj.zone
on 03 Jul 01:05
Makes sense given that AI has been trained on all the prejudiced blatherings of humanity so far, and it just tries to imitate what it has seen. Yet it's being used to make decisions as if it's some wise oracle.
SunshineJogger@feddit.org
on 03 Jul 05:37
I use AI daily and find it useful as a tool. It's also frustrating in its current state: the disgusting default buttlick responses, trying to please the user with fake polite drool.
And then the many, many mistakes.
And it's a new tool, so yeah, it needs to ripen...
And that means going all in on AI as a technology, at a company-strategy level, is dumb.
When building a product, the problem the product solves should be the center of the work, not the technology used to achieve the solution.
I've got some bad news for you. They will never fix the mistakes, as it cannot reason; it has no actual intelligence. LLMs are already plateauing and are miles away from being trustworthy. And they steal copyrighted work with every request.
OpenPassageways@programming.dev
on 03 Jul 11:31
As skeptical as I am, I'm feeling pressure to join the BS train on this. It's literally all over LinkedIn... Even though I'm sure it's mostly bullshit, it doesn't matter what I think. What matters is that this is where billionaires are dumping their money, so I need to be in a position to get some of it, or I may not be able to be gainfully employed in 10 years.
𤣠š¤£
Guess I must be one of those "marginalized"...
🤣
Proper headline:
"Intelligent People Understand the Limits and Dangers of AI; Unfortunately AI Company Leaders Do Not, and Seek to Silence Opposition"
How do they define "marginalized"?
There it is. Reason. Machines can't reason. Not one. They can fake it. They can mimic. But they cannot reason and never will.
All Americans are, ya nitwits