Why so much hate toward AI?
from danzabia@infosec.pub to technology@lemmy.world on 10 Jun 06:37
https://infosec.pub/post/29676989

I’m curious about the strong negative feelings towards AI and LLMs. While I don’t defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

#technology

threaded - newest

Kyrgizion@lemmy.world on 10 Jun 06:40 next collapse

Because the goal of “AI” is to make the vast majority of us obsolete. The billion-dollar question AI is trying to answer is “why should we continue to pay wages?”. That is bad for everyone who isn’t part of the owner class. Even if you personally benefit from using it to make yourself more productive/creative/…, the data you input can and WILL eventually be used against you.

If you only self-host and know what you’re doing, this might be somewhat different, but it still won’t stop the big guys from trying to swallow all the others whole.

iopq@lemmy.world on 10 Jun 08:27 next collapse

Reads like a rant against the industrial revolution. “The industry is only concerned about replacing workers with steam engines!”

Kyrgizion@lemmy.world on 10 Jun 09:29 next collapse

You’re probably not wrong. It’s definitely along the same lines… although the repercussions of this particular one will be infinitely greater than those of the industrial revolution.

Also, industrialization made for better products because of better manufacturing processes. I’m by no means sure we can say the same about AI. Maybe some day, but today it’s just “an advanced dumbass” in most real-world scenarios.

chloroken@lemmy.ml on 10 Jun 13:50 next collapse

Read ‘The Communist Manifesto’ if you’d like to understand the ways the bourgeoisie used the industrial revolution to hurt the proletariat, exactly as they are doing now with AI.

iopq@lemmy.world on 12 Jun 06:55 collapse

The industrial revolution is what made socialism possible, since now a smaller number of workers can support the elderly, children, etc.

Just look at China before and after industrializing: life expectancy way up, and the government can provide services like public transit and medicine (for a nominal fee).

chloroken@lemmy.ml on 12 Jun 14:42 collapse

We’re discussing how industry and technology are used against the proletariat, not how state economies form. You can read the pamphlet referenced in the previous post if you’d like to understand the topic at hand.

jrgn@lemmy.world on 10 Jun 20:49 collapse

You should check this out: thenib.com/im-a-luddite/

Mrkawfee@lemmy.world on 10 Jun 12:48 collapse

the data you input can and WILL eventually be used against you.

Can you expand further on this?

Kyrgizion@lemmy.world on 10 Jun 13:30 collapse

User data has been the internet’s greatest treasure trove since the advent of Google. LLMs are perfectly set up to extract the most intimate data available from their users (“mental health” conversations, financial advice, …), which can be used against them in a soft way (higher prices when looking for mental health help), or to outright manipulate or blackmail them.

Regardless, there is no scenario in which the end user wins.

fullsquare@awful.systems on 10 Jun 13:56 collapse

For a slightly earlier instance of this, there’s also real-time bidding.

Illecors@lemmy.cafe on 10 Jun 06:41 next collapse

There is no AI.

What’s sold as an expert is actually a delusional graduate.

EgoNo4@lemmy.world on 10 Jun 06:41 next collapse

Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution?

Both.

Cosmonauticus@lemmy.world on 10 Jun 06:52 next collapse

I can only speak as an artist.

Because its entire functionality is based on theft. Companies are stealing people’s work and profiting off of it, with no payment to the artists whose works their platforms are built on.

You often hear the argument that all artists borrow from others, but if I created an anime that blatantly copied the style of Studio Ghibli, I’d rightly be sued. On top of that, AI copies so blatantly that it recreates the original artists’ watermarks.

Fuck AI

Even_Adder@lemmy.dbzer0.com on 11 Jun 10:47 collapse

You can’t be sued over a style, and styles can’t be copyrighted. Studio Ponoc is made up of ex-Ghibli staff, and they have been releasing movies for a while. Stop spreading misinformation.

www.imdb.com/title/tt16369708/

www.imdb.com/title/tt15054592/

www.imdb.com/title/tt8223844/

www.imdb.com/title/tt6336356/

DmMacniel@feddit.org on 10 Jun 07:00 next collapse

AI companies constantly need new training data and strain open infrastructure with high-volume requests. While they take everything out of others’ work, they don’t give anything back. It’s literally asocial behaviour.

TheLeadenSea@sh.itjust.works on 10 Jun 07:10 collapse

What do you mean? They give back open-weights models that anyone can use. Only the proprietary corporate AI is exploitative.

DmMacniel@feddit.org on 10 Jun 08:03 collapse

Cool, everyone could already use the websites they scraped the data from.

Also, anyone can use open-weights models? Even people without beefy systems? Please…

Boomkop3@reddthat.com on 10 Jun 07:05 next collapse

It’s easy to deny it’s built on stolen content, and difficult to prove otherwise. AI companies know this, and they have still been caught stealing shitty drawings from children and buying user data that should’ve been private.

Dojan@pawb.social on 10 Jun 07:42 collapse

It’s honestly ridiculous too. Imagine saying that your whole business model is shooting people, and that if you’re not allowed to shoot people it’ll crash. So when accused of killing people, you go “nu uh” and hide the weapons you did it with, and the legal system is okay with that.

It’s all so stupid.

Engywuck@lemm.ee on 10 Jun 07:10 next collapse

Karma farming, like everything on any social network, be it centralized or decentralized. I’m not exactly enthusiastic about AI, but I can tell it has its use cases (with caution). AI itself is not the problem. Most likely, the corps behind it are (their practices are not always transparent).

KeepFlying@lemmy.world on 10 Jun 07:25 next collapse

On top of everything else people mentioned, it’s so profoundly stupid to me that AI is being pushed to take my summary of a message and turn it into an email, only for AI to then take those emails and spit out a summary again.

At that point just let me ditch the formality and send over the summary in the first place.

But more generally, I don’t have an issue with “AI”, just generative AI. And I have a huge issue with it being touted as some oracle of knowledge when it isn’t. It’s dangerous to view it that way. Right now we’re “okay” at differentiating real information from hallucinations, but so many people aren’t, and it will only get worse as people grow complacent and AI gets better at hiding its mistakes.

Part of this is the natural evolution of technology, and I’m sure the situation will improve, but it’s being pushed so hard in the meantime that it’s making the problem worse.

The first GPT models were kept private for being too dangerous, and they weren’t even as “good” as the modern ones. I wish we could go back to those days.

INeedMana@lemmy.world on 10 Jun 10:48 collapse

At that point just let me ditch the formality and send over the summary in the first place.

A bit of a tangent, but so much this. Why haven’t we normalized using fewer words already?
Why do we keep writing (in some blogs and in all of content marketing) whole screens of text to convey a single sentence of real content?
Why do we keep the useless “hello” and “regards” instead of just getting straight to the point?

INeedMana@lemmy.world on 10 Jun 07:29 next collapse

Wasn’t there the same question here yesterday?

hendrik@palaver.p3x.de on 10 Jun 07:44 collapse

Yes. https://infosec.pub/post/29620772

Seems someone deleted it, and now we have to discuss the same thing again.

INeedMana@lemmy.world on 10 Jun 08:05 collapse

According to the modlog, it was against Rule #2.

troed@fedia.io on 10 Jun 07:48 next collapse

Especially in coding?

Actually, that's where they are least suited. Companies will spend more money on cleaning up bad code bases (not least from a security point of view) than is gained from "vibe coding".

Audio, art - anything that doesn't need "bit-perfect" output is another matter, though.

ZILtoid1991@lemmy.world on 10 Jun 08:07 next collapse

There’s also the issue of people now flooding the internet with AI-generated tutorials and documentation, making things even harder. I managed to botch the Linux install on my Raspberry Pi so badly I couldn’t fix it easily, all thanks to a crappy AI-generated tutorial on adding a directory to PATH that I didn’t immediately spot.

With art, it can’t really be controlled well enough to be useful for much beyond a spam machine, but spammers only care about social media clout and/or ad revenue.

fullsquare@awful.systems on 10 Jun 08:44 collapse

and also chatbot-generated bug reports (like the ones curl gets flooded with) and entire open source projects (i guess for some stupid crypto scheme)

fullsquare@awful.systems on 10 Jun 08:53 collapse

But but, now the idea man can vibecode. this shit destroys the separation between management and codebase, making it the perfect antiproductivity tool

IsaamoonKHGDT_6143@lemmy.zip on 10 Jun 07:51 next collapse

As several people have already answered the question, I will just clarify some points.

Not all countries consider AI training on copyrighted material to be theft. For example, Japan has allowed AI to be trained on copyrighted material since 2019, which is strange because that country is known for its strict copyright laws.

Also, saying that AI can’t or won’t harm society sells, although I don’t deny the consequences of this technology. But that pitch will only hold if AI doesn’t get better; otherwise it could prove counterproductive.

ZILtoid1991@lemmy.world on 10 Jun 08:00 next collapse

My main gripes are more philosophical in nature: should we automate away certain parts of the human experience? Should we automate art? Should we automate human connection?

On top of these, there’s also the concern of spam. AI is quick enough to flood the internet with low-effort garbage.

Dr_Nik@lemmy.world on 10 Jun 08:34 collapse

The industrial revolution called, they want their argument against the use of automated looms back.

ZILtoid1991@lemmy.world on 10 Jun 09:23 collapse

The capitalists who own the AI thank you for fighting on their side.

Dr_Nik@lemmy.world on 10 Jun 10:24 collapse

Lots of assumptions there. In case you actually care: I don’t think any one company should be allowed to own the base system that allows AI to function, especially if it’s trained off of public content or content owned by other groups, but that’s kind of immaterial here. It seems insane to villainize a technology because of who might make money off of it. These are two separate arguments (and frankly, they historically have the opposite beneficiaries from what you would expect).

Prior to the industrial revolution, weaving was done by hand, making all cloth expensive or the product of sweatshops (and it was still comparatively expensive next to today’s). Case in point: you can find many pieces of historical worker clothing cut specifically to use every bit of a rectangular piece of fabric, because you did not want to waste any little scrap (today it’s common for people to throw scraps away because they don’t like that section of the pattern).

With the advent of automated looms several things happened:

  • the skilled workers who could operate the looms quickly were put out of a job because the machine could do things much faster, although it required a few specialized operators to set up and repair the equipment.
  • the owners of the fabric mills that couldn’t afford to upgrade either died out or specialized in fabrics that could not be made by the machines (which set up an arms race of sorts where the machine builders kept improving things)
  • the quality of fabric went down: where it was previously possible to get different structures of fabric with a simple order to the worker, it took a while for machines to do anything other than a simple weave (it actually took Jacquard’s punch-card loom, the invention that later inspired Ada Lovelace, plus the above-mentioned arms race), and looms even today require a different range of threads than what can be hand-woven, but…
  • the cost went down so much that accessibility went through the roof. Suddenly the average pauper COULD afford to clothe their entire family with a week’s worth of clothes. New industries cropped up. Health and economic mobility soared.

This is a huge oversimplification, but history is well known to repeat itself due to human nature. Map the bullets above onto today’s arguments about AI and you will see an often-ignored end result: humanity can grow to have more time and resources to improve the health and wellness of our population IF we use the tools. You can choose to complain that the contract worker isn’t going to get paid his equivalent of $5/hr for spending 2 weeks arguing back and forth about a dog logo for a new pet store, but I am going to celebrate the person who realizes they can automate a system to find new business filings and approach every new business in their area with a package of 20 logos each, AI-generated using unique prompts drawn from their experience in logo design, all while reducing their workload and making more money.

ZILtoid1991@lemmy.world on 10 Jun 11:32 collapse

GenAI is automating the more human fields, not some production-line work. This isn’t gonna lead to an abundance of clothing that maybe isn’t artisan-made, but to the flooding of the art fields with low-quality products. Hope you like Marvel slop, because you’re gonna get even more Marvel slop, except even worse!

Creativity isn’t having the idea of a big booba anime girl, it’s how you draw said big booba anime girl. Unless you’re one of those “idea guys” who are still pissed off that the group of artists and programmers didn’t steal the code of Call of Duty to put VR support into it, so you could sell it to the publisher at a markup, back when VR was a big thing for a while.

Dr_Nik@lemmy.world on 10 Jun 12:01 next collapse

Gotcha, so no actual discourse then.

Incidentally, I do enjoy Marvel “slop”, and quite honestly one of my favorite YouTube channels is Abandoned Films: youtu.be/mPQgim0CuuI

This is super creative and could never have been made without AI.

I also enjoy reading books like Psalm for the Wild Built. It’s almost like there’s space for both things…

petrol_sniff_king@lemmy.blahaj.zone on 10 Jun 15:30 collapse

This is creepy.

GnuLinuxDude@lemmy.ml on 10 Jun 12:23 collapse

but the flooding of the art fields with low-quality products

It’s even worse than that, because the #1 use case is spam, regardless of what others think they personally gain out of it. It is exhausting filtering through the endless garbage spam results. And it isn’t just text sites: searching generic terms on sites like YouTube (e.g. “cats”) will quickly lead you to a deluge of AI shit. Where did the real cats go?

It’s incredible that Dr_Nik is holding up a bland, fake movie trailer as an example of how AI is good. It’s “super creative” to repeatedly prompt Veo 3 to give you synthetic Hobbit-style images that have the vague appearance of looking like VistaVision. Actually, super creative is kinda already done; watch me go hyper creative:

“Whoa, now you can make it look like an 80s rock music video. Whoa, now you can make it look like a 20s silent film. Whoa, now you can make it look like a 90s sci-fi flick. Whoa, now you can make it look like a superhero film.”

ZILtoid1991@lemmy.world on 10 Jun 12:35 collapse

It has even made “manual” programming worse.

Wanted to google how to modify the PATH variable on Linux? Here’s an AI-hallucinated example that will break your installation. Wanted to look up an algorithm? Here’s an AI-hallucinated explanation that’s wrong enough in places that you just end up wasting your own time.
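For what it’s worth, the way those hallucinated tutorials usually break things is by overwriting PATH instead of appending to it. A minimal sketch of the distinction (in Python for illustration; the directory name is made up, and the same logic applies to a line in a shell profile):

```python
import os

new_dir = "/opt/mytool/bin"  # hypothetical directory to add

# The broken-tutorial version replaces PATH wholesale, so ls, grep, etc.
# can no longer be found by anything inheriting this environment:
# os.environ["PATH"] = new_dir

# The correct version appends, keeping every existing entry:
path = os.environ.get("PATH", "")
if new_dir not in path.split(os.pathsep):
    os.environ["PATH"] = path + os.pathsep + new_dir

print(os.environ["PATH"])
```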

ShittyBeatlesFCPres@lemmy.world on 10 Jun 08:02 next collapse

My skepticism is because it’s kind of trash for general use. I see great promise in specialized A.I., stuff like AlphaFold, or astronomy situations where the telescope data is coming in hot and it would take years for humans to go through it all.

But I don’t think it should be in everything. Google shouldn’t be sticking LLM summaries at the top of search results: it hallucinates, so I need to check the veracity anyway. In medicine, it can help double-check, but it can’t be the doctor. It’s just not there yet and might never get there. Progress has kind of stalled.

So, I don’t “hate” any technology. I hate when people misapply it. To me, it’s (at best) beta software and should not be in production anywhere important. If you want to use it for summarizing Scooby Doo episodes, fine. But it shouldn’t be part of anything we rely on yet.

ShittyBeatlesFCPres@lemmy.world on 10 Jun 08:17 collapse

Also, it should never be used for art. I don’t care if you need to make a logo for a company and A.I. spits out whatever. But real art is about humans expressing something. We don’t value cave paintings because they’re perfect. We value them because someone thousands of years ago made them.

So, that’s something I hate about it. People think it can “democratize” art. Art is already democratized. I have a child’s drawing on my fridge that means more to me than anything in any museum. The beauty of some things is not that they were generated. It’s that someone cared enough to try. I’d rather have a misspelled crayon card from my niece than some shit ChatGPT generated.

petrol_sniff_king@lemmy.blahaj.zone on 10 Jun 15:35 collapse

Yeah, “democratize art” means “I’m jealous of the cash sloshing around out there.”

People say things like “I’m not as good as this guy on TikTok.” Why do you need to be? Literally, who asked?

SpicyLizards@reddthat.com on 10 Jun 08:11 next collapse

Not much to win with.

A fake bubble of broken technology that’s not capable of doing what is advertised; it’s environmentally destructive, it’s used for identification and genocide, it threatens and actually takes jobs, and it concentrates money and power with the already wealthy.

iopq@lemmy.world on 10 Jun 08:29 collapse

It’s either broken and incapable, or it takes jobs.

It can’t be both useless and destroying jobs at the same time.

medem@lemmy.wtf on 10 Jun 08:35 next collapse

Have you never had a corporate job? A technology can be very much useless while incompetent “managers” who believe it can do better than humans WILL buy the former to get rid of the latter, even though that’s a stupid thing to do, just to meet their yearly targets and other similarly idiotic measures of division/team “productivity”.

iopq@lemmy.world on 12 Jun 06:56 collapse

In the corporate world, managers get fired for not completing projects.

DesolateMood@lemmy.zip on 10 Jun 08:38 next collapse

And yet AI pulls through and somehow does manage to do both

fullsquare@awful.systems on 10 Jun 08:49 next collapse

it’s not ai taking your job, it’s your boss. all they need to believe is that a language-shaped noise generator can do the work; it doesn’t matter whether it actually can (it can’t). then the business either suffers greatly or hires people back (like klarna)

dcoe@lemmy.world on 10 Jun 10:11 collapse

It can absolutely be both. Expensive competent people are replaced with inexpensive morons all the time.

boatswain@infosec.pub on 10 Jun 08:17 next collapse

Because of studies like arxiv.org/abs/2211.03622:

Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.
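To make that gap concrete, here’s the kind of difference such studies look at, as a purely hypothetical sketch (not code from the paper): SQL built by string interpolation, a classic insecure pattern, versus a parameterized query.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Classic insecure pattern: building SQL by string interpolation.
    # A username like "x' OR '1'='1" rewrites the query itself
    # (SQL injection).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # Parameterized query: the driver passes the value separately,
    # so it is never parsed as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("alice",), ("bob",)])
    evil = "x' OR '1'='1"
    print(find_user_insecure(conn, evil))  # matches every row
    print(find_user_secure(conn, evil))    # matches nothing
```

Both functions “work” on happy-path input, which is exactly why someone who didn’t write the query themselves is more likely to believe the first one is fine.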

Dr_Nik@lemmy.world on 10 Jun 08:36 collapse

Seems like this is a good argument for specialization. Have AI write bad but fast code, and pay specialists to improve it and secure it when needed. My 2026 Furby with no connection to the outside world doesn’t need secure code; it just needs to make kids smile.

subignition@fedia.io on 10 Jun 08:40 collapse

They're called programmers, and it's faster and less expensive all around to just have humans do it better the first time.

Dr_Nik@lemmy.world on 10 Jun 09:48 collapse

Have you talked to any programmers about this? I know several who, in the past six months alone, have completely changed their view on exactly how effective AI is at automating parts of their coding. Not only are they using it, they are paying to use it because it gives them a personal return on investment… but you know, you can keep using that push lawnmower; just don’t complain when the kids next door run circles around you at a quarter the cost.

just_another_person@lemmy.world on 10 Jun 10:51 next collapse

Automating parts of something as a reference tool is a WILDLY different thing than deferring to AI to finalize your code, which will be shitcode.

Anybody out there programming right now who is letting AI write the code is bad at their job.

Dr_Nik@lemmy.world on 10 Jun 12:03 collapse

No argument there.

remon@ani.social on 10 Jun 11:08 next collapse

but you know, you can keep using that push lawnmower; just don’t complain when the kids next door run circles around you at a quarter the cost.

That push lawnmower will still mow the lawn in decades to come, though, while the kids’ fancy high-tech lawnmower will explode in a few months, and you’re lucky if it doesn’t burn the entire house down with it.

GnuLinuxDude@lemmy.ml on 10 Jun 11:58 next collapse

Have you had to code-review someone who is obviously just committing AI bullshit? It is an incredible waste of time. I know people who learned pre-LLM (i.e. have functioning brains) and are practically on the verge of complete apathy from having to babysit AI code and coders, especially as their management keeps pushing people to use it. As in, they must use LLMs as a performance metric.

fullsquare@awful.systems on 10 Jun 13:16 collapse

congratulations on offloading your critical thinking skills to a chatbot that you most likely don’t own. what are you gonna do when the bubble is over, or when the dc it runs in burns down

MagicShel@lemmy.zip on 10 Jun 08:50 next collapse

It’s a massive new disruptive technology and people are scared of what changes it will bring. AI companies are putting out tons of propaganda both claiming AI can do anything and fear mongering that AI is going to surpass and subjugate us to back up that same narrative.

Also, there is so much focus on democratizing content creation, which is at best a very mixed bag, and little attention is given to collaborative uses (which I think is where AI shines) because it’s so much harder to demonstrate, and it demands critical thinking skills and underlying knowledge.

In short, everything AI is hyped as is a lie, and that’s all most people see. When you’re poking around with it, you’re most likely to just ask it to do something for you: write a paper, create a picture, whatever. The results won’t impress anyone actually good at those things, and will impress the fuck out of people who don’t know any better.

This simultaneously reinforces two things to two different groups: AI is utter garbage and AI is smarter than half the people you know and is going to take all the jobs.

Eat_Your_Paisley@lemm.ee on 10 Jun 09:49 next collapse

It’s not particularly accurate, and then there are the privacy concerns.

jyl@sopuli.xyz on 10 Jun 09:51 next collapse

  • Useless fake spam content.
  • Posting AI slop ruins the “social” part of social media. You’re not reading real human thoughts anymore, just statistically plausible words.
  • Same with machine-generated “art”. What’s the point?
  • AI companies are leeches; they steal work for the purpose of undercutting the original creators with derivative content.
  • Vibe coders produce utter garbage that nobody, especially not themselves, understands, and are somehow smug about it.
  • A lot of AI stuff is a useless waste of resources.

Most of the hate is justified IMO, but a couple of weeks ago I died on the hill of arguing that an LLM can be useful as a code documentation search engine. Once the train started, even a reply that thought software libraries contain books got upvotes.

Lyra_Lycan@lemmy.blahaj.zone on 10 Jun 10:11 collapse

Not to mention the environmental cost is literally astronomical. I would be very interested to know how many times out of 10 AI code is actually functional, because its success rate for every other type of generation is much lower.

fullsquare@awful.systems on 10 Jun 13:13 collapse

chatbot DCs burn enough electricity to power a mid-sized euro country, all for seven-fingered hands and glue-and-rocks pizza

fullsquare@awful.systems on 10 Jun 13:53 next collapse

taking a couple steps back and looking at the bigger picture (something that you might have never done in your entire life, guessing by the tone of your post): people want to automate things that they don’t want to do. nobody wants to make elaborate spam that will evade detection by hand, but if you can automate it, somebody will use it that way. this is why spam, ads, certain kinds of propaganda and deepfakes are among the big actual use cases of genai that likely won’t go away (isn’t the future bright?)

this is tied to another point. if a thing requires some level of skill to make, then naturally there are some restraints. in pre-slopnami times, making a deepfake useful for black propaganda required a co-conspirator with both the ability to do it and the correct political slant, who would shut up about it and have good enough opsec not to leak it unintentionally. maybe more than one. now, making sorta-convincing deepfakes requires involving fewer people. this also includes things like nonconsensual porn, for which there are fewer barriers now thanks to genai

then, again, people automate things they don’t want to do. there are people who do like coding. then there are also Idea Men butchering codebases trying to vibecode, while they have no inclination for or understanding of coding, what it takes, or what the result should look like. it might not be a coincidence that llms mostly charmed the managerial class, which resulted in them pushing chatbots to automate away things they don’t like or understand and would otherwise have to pay people for, all while the chatbot will never say such sacrilegious things as “no” or “your idea is physically impossible” or “there is no reason for any of this”. people who don’t like coding vibecode. people who don’t like painting generate images. people who don’t like understanding things cram text through chatbots to summarize it. maybe you don’t see a problem with this, but that’s entirely a you problem

this leads to three further points. chatbots allow you, for the low low price of selling your thoughts to saltman & co, to offload all your “thinking” to them. this makes cheating exceedingly easy in some cases, something that schools have to adjust to, while destroying any ability to learn in the students who use them this way. another thing is that in production, chatbots are virtual dumbasses that never learn, and seniors are forced to babysit them and fix their mistakes. an intern at least learns something and won’t repeat the mistake; a chatbot will fall into the same trap the moment you run out of context window. this hits all the major causes of burnout at once, and maybe the senior will leave. then what? there’s no junior to promote in their place, because the junior was replaced by a chatbot.

this all comes before noticing little things like the multibillion-dollar stock bubble tied to openai, or its mid-sized-euro-country-sized power demands, or whatever monstrosities palantir is cooking, and a couple of others that i’m surely forgetting right now

and also

Is the backlash due to media narratives about AI replacing software engineers?

it’s you getting swept up in an outsized ad campaign for the most bloated startup in history, not “backlash in the media”. what you see as “backlash” is everyone else who isn’t parroting the openai marketing brochure

While I don’t defend them,

are you suure

e: and also, lots of these chatbots are used as accountability sinks. sorry, nothing good will ever happen to you, because Computer Says No (pay no attention to the oligarch behind the curtain)

e2: this is also partially a side effect of silicon valley running out of ideas. after crypto crashed and burned, then metaverse crashed and burned, all of these people (the same people who ran crypto before, including altman himself) and their money went to pump the next bubble, because they can’t imagine anything else that will bring them that promised infinite growth. and their having money at all is a result of ZIRP, which might be coming to an end, and then there will be fear and loathing, because the vcs have somehow unlearned how to make money

Vanth@reddthat.com on 10 Jun 14:02 next collapse

Don’t forget problems with everything around AI too. Like in the US, the Big Beautiful Bill (🤮) attempts to ban states from enforcing AI laws for ten years.

And even more broadly, what happens to the people who do lose their jobs to AI? Safety nets are being actively burned down. Just saying “people are scared of new tech” ignores that AI will lead to a shift we are not prepared for, and people will suffer from it. It’s way bigger than a handful of new tech tools in a vacuum.

technocrit@lemmy.dbzer0.com on 10 Jun 15:27 next collapse

“AI” is a pseudo-scientific grift.

Perhaps more importantly, the underlying technologies (like any technology) are already co-opted by the state, capitalism, imperialism, etc. for the purposes of violence, surveillance, control, etc.

Sure, it’s cool for a chatbot to summarize Stack Exchange, but it’s much less cool to track and murder people while committing genocide. In either case there is no “intelligence” apart from the humans involved. “AI” is primarily a tool for terrible people to do terrible things while putting the responsibility on some ethereal, unaccountable “intelligence” (aka a computer).

borokov@lemmy.world on 10 Jun 17:45 next collapse

Dunning-Kruger effect.

Lots of people now think they can be developers because they made a shitty, half-working game using vibe coding.

Would you trust a surgeon who relies on ChatGPT? So why should you trust an LLM to develop programs? You know that airplanes, nuclear power plants, and a LOT of critical infrastructure rely on software, right?

Treczoks@lemmy.world on 10 Jun 18:52 next collapse

AI is theft in the first place. None of the current engines have gotten their training data legally. They are based on pirated books and content scraped from websites that explicitly forbid the use of their data for training LLMs.

And all that to create mediocre parrots with dictionaries that are wrong half the time, and often enough give dangerous, even lethal advice, all while wasting power and computational resources.

FinishingDutch@lemmy.world on 11 Jun 15:52 next collapse

If you don’t hate AI, you’re not informed enough.

It has the potential to disrupt pretty much everything in a negative way. Especially when regulations always lag behind. AI will be abused by corporations in the worst way possible, while also being bad for the planet.

And the people who are most excited about it, tend to be the biggest shitheads. Basically, no informed person should want AI anywhere near them unless they directly control it.

roserose56@lemmy.ca on 11 Jun 16:52 next collapse

Because so far we only see the negative impacts on human society, IMO. The latest news hasn’t helped at all, not to mention how the USA is moving towards AI. Every positive of AI ends up being used in a workplace, which will then most likely lead to layoffs. I’m starting to think that Finch in Person of Interest was right all along.

edit: They sell us an unfinished product, which we then build out in the wrong way.

Myro@lemm.ee on 12 Jun 16:05 next collapse

Many people on Lemmy are extremely negative towards AI, which is unfortunate. There are MANY dangers, but there are also MANY obvious use cases where AI can be of help (summarizing a meeting, cleaning up any text, etc.).

Yes, the way these models have been trained is shameful, but unfortunately that ship has sailed, let’s be honest.

hexagon@lemmy.ml on 17 Jun 23:45 collapse

AI has only one problem to solve: salaries