The anti-AI sentiment in the free software communities is concerning.
from FatCat@lemmy.world to linux@lemmy.ml on 13 Jun 10:46
https://lemmy.world/post/16479710

Whenever AI is mentioned lots of people in the Linux space immediately react negatively. Creators like TheLinuxExperiment on YouTube always feel the need to add a disclaimer that “some people think AI is problematic” or something along those lines if an AI topic is discussed. I get that AI has many problems, but at the same time the potential it has is immense, especially as an assistant on personal computers (just look at what “Apple Intelligence” seems to be capable of). GNOME and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete. Using an AI-less desktop may be akin to hand-copying books after the printing press revolution. If you think of specific problems it is better to point them out and try to think of solutions, not reject the technology as a whole.

TLDR: A lot of Luddite sentiment around AI in the Linux community.

#linux


DudeImMacGyver@sh.itjust.works on 13 Jun 10:53 next collapse

Reminder that we don’t even have AI yet, just machine learning models, which are not the same thing despite wide misuse of the term AI.

NoiseColor@startrek.website on 13 Jun 11:00 next collapse

Yes, lots of people are using this argument when reacting negatively.

DudeImMacGyver@sh.itjust.works on 13 Jun 11:04 collapse

Well, it’s kind of more of a fact than an argument, but do go on!

FatCat@lemmy.world on 13 Jun 11:30 next collapse

It’s an interesting discussion, but I disagree that you have a clear-cut fact.

Just because it’s a computer writing things with math, why do you say it is not intelligence? It would be helpful if you could be more detailed here.

Auli@lemmy.ca on 13 Jun 16:52 collapse

It’s not intelligence. Is it useful? Yes. Can it help us? Yes.

NoiseColor@startrek.website on 13 Jun 20:19 collapse

Pointless semantics.

NoiseColor@startrek.website on 13 Jun 16:09 collapse

Well, not at all. What a word means is not defined by what you might think. When the majority starts to use a word for something and that sticks, it can be adopted. That happens all the time, and I have read articles about it many times, even for our current predicament. Language is evolving. Meanings change. And yes, AI today includes what is technically machine learning. Sorry friend, that’s how it works. Sure, you can be the grumpy drunk at a bar complaining that this is not strictly AI by some definition while the rest of the world rolls their eyes and proceeds to more meaningful debates.

DudeImMacGyver@sh.itjust.works on 13 Jun 16:21 collapse

Words have meaning and, sure, they can be abused and change meaning over time, but let’s be real here: AI is a hype term with no basis in reality. We do not have AI, we aren’t even all that close. You can make all the ad hominem comments you want, but at the end of the day the terminology comes from ignorant figureheads hyping shit up for profit (at great environmental cost too; LLMs aka “AI” take up a lot of power while yielding questionable results).

Kinda sounds like you bought into the hype, friend.

NoiseColor@startrek.website on 13 Jun 20:18 collapse

You missed the point again, oh dear! Let me try again in simpler terms: you yourself don’t define words; how they are used by the public does. So if the world calls it AI, then the word will mean what everybody means when they use it.

This is how words come to be, evolve, and end up in the dictionary. Nobody cares what you think. AI today includes ML. Get over it.

Nice try with the deflection attempts, but I really don’t care about them. I’m only here to teach you where words come from and to tell you that the article is written about you.

Also that I’m out of time for this. Bye.

FatCat@lemmy.world on 13 Jun 11:05 next collapse

That’s just nitpicking. Everyone here knows what we mean by AI. Yes, it refers to LLMs.

Reminds me of Richard Stallman always interjecting to say “actually it’s GNU/Linux, or as I like to say, GNU plus Linux”…

Well no, Mr. Stallman, it’s actually GNU + Linux + Wayland + systemd + Chromium and whatever other software you have installed. Are you happy now??

LunarLoony@lemmy.sdf.org on 13 Jun 11:07 next collapse

So when we actually do have AI, what are we supposed to call it? The current use of the term “AI” is too ambiguous to be of any use.

jacobc436@lemmy.ml on 13 Jun 11:11 next collapse

Nothing was ever wrong with calling them “virtual assistants” - at least with them you’re conditioned to have a low bar of expectations. So if it performs past expectations, you’ll be excited, lol.

breadsmasher@lemmy.world on 13 Jun 11:12 next collapse

What AI means will change, and what it refers to will change. Currently, LLMs and other technologies are referred to as AI, like you say. In five years’ time we will have made huge leaps. Likely, this will result in technology also called AI.

In a similar vein, hover boards are still known as exactly that - like in films. Whereas the “real” hover board that exists has wheels. We didn’t stop calling the other ones hover boards, and if we ever get real ones they will likely also be called hoverboards.

snooggums@midwest.social on 13 Jun 11:45 collapse

Whereas the “real” hover board that exists has wheels.

Hovercraft have existed for decades and actually hover, which makes everyone just accepting hoverboards as wheeled infuriating.

HumanPenguin@feddit.uk on 13 Jun 11:15 next collapse

Honestly, what we have now is AI. As in, it is not intelligent, it just tries to mimic intelligence.

Digital intelligence, if we ever achieve it, would be a more accurate name.

FatCat@lemmy.world on 13 Jun 11:28 next collapse

This is a bit philosophical, but who is to say that mimicking intelligence with advanced math is not intelligence? LLMs can perform various thinking tasks better than humans we consider intelligent.

MudMan@fedia.io on 13 Jun 11:29 collapse

Look, the naming ship has sailed and sunk somewhere in the middle of the ocean. I think it's time to accept that "AI" just means "generative model" and what we would have called "AI" is now more narrowly "AGI".

People call videogame enemies "AI", too, and it's not the end of the world, it's just imprecise.

cupcakezealot@lemmy.blahaj.zone on 13 Jun 11:49 next collapse

apple intelligence obviously /s

MalReynolds@slrpnk.net on 13 Jun 13:11 collapse

AGI, then ASI. Goalposts change…

rostselmasch@lemmygrad.ml on 13 Jun 11:28 next collapse

Linux doesn’t need GNU components at all to be a functional operating system. And you wouldn’t see any difference whether your HTTP server runs on GNU/Linux or Linux without GNU.

On the other hand, there is a difference between AI and an LLM. The difference is significant enough to distinguish. You may mean LLMs when you talk about AI, but tbh I thought you didn’t, because many people don’t.

davel@lemmy.ml on 13 Jun 14:33 collapse

Linux doesnt need GNU components at all to be a functional operating system.

Indeed: look no further than Alpine Linux.

Alpine Linux is a Linux distribution designed to be small, simple, and secure. It uses musl, BusyBox, and OpenRC instead of the more commonly used glibc, GNU Core Utilities, and systemd. This makes Alpine one of few Linux distributions not to be based on the GNU Core Utilities.

InevitableWaffles@midwest.social on 13 Jun 11:36 next collapse

As someone who frequently interacts with the tech illiterate: no, they don’t. This sudden rush to put weighted text-hallucination tables into everything isn’t that helpful. The hype feels like self-driving cars or 3D TVs for those of us old enough to remember them. The potential for damage is much higher than either of those two preceding fads, and cars actually killed people. I think many of us are expressing a healthy level of skepticism toward the people who need to sell us the next big thing, and it is absolutely warranted.

FatCat@lemmy.world on 13 Jun 12:26 next collapse

The potential for damage is much higher

Doubt it. Maybe Microsoft can fuck it up somehow but the tech is here to stay and will do massive good.

InevitableWaffles@midwest.social on 13 Jun 13:18 collapse

You can doubt all you like, but we keep seeing the training data leaking out with passwords and personal information. This problem won’t be solved by the people who created it, since they don’t care, and fundamentally the technology will always reflect that lack of care. FOSS ones may do better in this regard, but they are still datasets without context. That’s the crux of the issue. The program or LLM has no context for what it says. That’s why you get these nonsensical responses telling people that killing themselves is a valid treatment for a toothache. Intelligence is understanding. The “AI” or LLM or, as I like to call them, glorified predictive text bars, doesn’t understand the words it is stringing together, and most people don’t know that due to flowery marketing language and hype. The threat is real.

Auli@lemmy.ca on 13 Jun 16:56 collapse

Not to mention the hallucinations. What a great marketing term for “it’s fucking wrong”.

InevitableWaffles@midwest.social on 13 Jun 23:45 collapse

They act like it’s the computer daydreaming. No, it’s wrong. The machine is supposed to provide me with correct information. It didn’t. These marketing wizards are selling snake oil in such a lovely bottle these days.

Auli@lemmy.ca on 13 Jun 16:54 collapse

It’s exactly like self-driving: everyone says this is the time we’re going to get AGI. But it will be like everything else: overhyped and under-delivered. Sure, it will have its uses, companies will replace people with it, and the enshittification will continue.

breadsmasher@lemmy.world on 13 Jun 12:35 collapse

To be 🤓 really really nitpicky, and i’m writing this because I find it interesting, not an attack or whatever. A tongue in cheek AcHtUaLlY 🤓

GNU/Linux is the “whole operating system”, and everything else is extra. The usefulness of an operating system without applications is debatable but they 🤓 technically aren’t required to complete the definition of an operating system.

But this is also basically the debate of Linux vs GNU/Linux vs also needing applications to make a useful operating system.

Quoting wiki summary,

In its original meaning, and one still common in hardware engineering, the operating system is a basic set of functions to control the hardware and manage things like task scheduling and system calls. In modern terminology used by software developers, the collection of these functions is usually referred to as a kernel, while an ‘operating system’ is expected to have a more extensive set of programmes. The GNU project maintains two kernels itself, allowing the creation of pure GNU operating systems, but the GNU toolchain is also used with non-GNU kernels. Due to the two different definitions of the term ‘operating system’, there is an ongoing debate concerning the naming of distributions of GNU packages with a non-GNU kernel.

en.wikipedia.org/wiki/GNU?wprov=sfti1#GNU_as_an_o…

FatCat@lemmy.world on 13 Jun 12:42 collapse

Don’t tell me Linux Mint would still be Linux Mint without a desktop environment like Cinnamon. An OS is the collection of all the software, not just the low-level code.

breadsmasher@lemmy.world on 13 Jun 12:43 collapse

Well that’s the debate! Is it “GNU/Linux Mint”? What about the desktop environment, “GNU/Linux Mint Cinnamon”?

ed.

Don’t tell me …

Absolutely not telling you - just reiterating the ongoing debate

knatschus@discuss.tchncs.de on 13 Jun 11:29 next collapse

Have you mentioned that in gaming forums as well, when they talked about AI?

AI is a broad term and can mean many different things, it does not need to mean ‘true’ AI

Lojcs@lemm.ee on 13 Jun 11:57 collapse

But ML is a type of AI. Just because the word makes you think of androids and Skynet doesn’t mean that’s the only thing that can be called so. Personally, I never understood this attempt at limiting the word to that now, while AI has been used for lesser computer intelligences for a long time.

Auli@lemmy.ca on 13 Jun 16:50 collapse

We don’t have Machine Intelligence though.

Lojcs@lemm.ee on 13 Jun 16:53 collapse

I wrote ML. If you didn’t misread, what are you talking about?

geography082@lemm.ee on 13 Jun 10:54 next collapse

“Good moment” for Apple to announce their AI shit

FQQD@lemmy.ohaa.xyz on 13 Jun 10:55 next collapse

I don’t think the community is generally against AI; there are plenty of FOSS projects. They just don’t like cash grabs, enshittification, and sending personal data to someone else’s computer.

FatCat@lemmy.world on 13 Jun 11:09 next collapse

I don’t see anyone calling for cash grabs or privacy-destroying features to be added to GNOME or other projects, so I don’t see why that would be an issue. 🙂

On-device FOSS models to help you with various tasks.

PrivateNoob@sopuli.xyz on 13 Jun 11:29 next collapse

FQQD probably refers to companies such as MS, Apple, Google, Adobe, etc. since they usually incorporate AI into everything.

wewbull@feddit.uk on 13 Jun 11:49 next collapse

You are, if you’re calling for Apple-like features.

You might argue that “private cloud” is privacy-preserving, but you can only implement that with the cash of Apple. I would also argue that anything leaving my machine for a bunch of servers I don’t control, without my knowledge, is NOT preserving my privacy.

FatCat@lemmy.world on 13 Jun 12:14 next collapse

You might argue that “private cloud” is privacy preserving

I don’t know since when “on device” means send it to a server. Come up with more straw men I didn’t mention for you to defeat.

MentalEdge@sopuli.xyz on 13 Jun 13:43 next collapse

Apple’s “private cloud” is a thing. Not all “Apple Intelligence” features are “on device”, some can and do utilize cloud-based processing power, and this will also be available to app developers.

Apparently this has additional safeguards vs “normal cloud” which is why they are branding it “private cloud”.

But it’s still “someone else’s computer” and apple is not keeping their AI implementation 100% on device.

wewbull@feddit.uk on 13 Jun 23:30 collapse

Since Apple’s keynote this week.

Auli@lemmy.ca on 13 Jun 16:49 collapse

I’m waiting for the moment the story breaks that ChatGPT didn’t do what Apple asked.

technocrit@lemmy.dbzer0.com on 13 Jun 15:44 collapse

On-device FOSS models to help you with various tasks.

Thankfully I really really don’t need an “AI” to use my desktop. I don’t want that kind of BS bloat either. But go ahead and install whatever you want on your machine.

umami_wasbi@lemmy.ml on 13 Jun 20:07 collapse

It is quite a bloat. Llama 3 8B is 4.7 GB by itself, not counting all the dependencies and drivers. This can easily take 10+ GB of the drive. My Ollama setup takes about 30 GB already. For a single application (unlike games such as COD that take up 300 GB), this is huge, almost the size of a clean OS install.
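If you want to check how much disk your own local models eat, here’s a minimal sketch. It assumes Ollama’s default blob directory on Linux (`~/.ollama/models`), which is an assumption; adjust the path if your setup differs:

```python
# Tally the disk space used by local model files.
# ~/.ollama/models is Ollama's default blob directory on Linux (an
# assumption; adjust the path if yours differs).
from pathlib import Path

def dir_size_bytes(root: Path) -> int:
    """Sum the sizes of all regular files under root, recursively."""
    return sum(p.stat().st_size for p in root.rglob("*") if p.is_file())

if __name__ == "__main__":
    models = Path.home() / ".ollama" / "models"
    if models.exists():
        print(f"{dir_size_bytes(models) / 1e9:.1f} GB of local models")
    else:
        print("no Ollama model directory found")
```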

anamethatisnt@lemmy.world on 13 Jun 11:36 collapse

sending personal data to someone else’s computer.

I think this is spot on. I think it’s exciting with LLMs but I’m not gonna give the huge corporations my data, nor anyone else for that matter.

Killing_Spark@feddit.de on 13 Jun 10:57 next collapse

I think the biggest problem is that AI, for now, is not an exact tool that gets everything right, because that’s just not what it is built to do. Which goes against much of the philosophy of most tools you’d find on your Linux PC.

Secondly: many people who choose Linux or another FOSS operating system do so, at least partially, to stay in control of their system, which includes knowing why stuff happens and being able to fix it. Again, that is just not what AI can currently deliver, and it’s unlikely it ever will.

So I see why people just choose to ignore the whole thing altogether.

FatCat@lemmy.world on 13 Jun 11:08 next collapse

Good point about the imprecision. On the other hand, most Linux desktop users are normies; think Steam Deck and so on.

Some of the most popular Linux desktops are built for ordinary people with the KISS principle in mind, not Arch-using tinkerers.

hydroptic@sopuli.xyz on 13 Jun 11:31 next collapse

On the other hand, most Linux desktop users are normies; think Steam Deck and so on.

Jesus fuck what a statement. Your parents probably regret having you.

Killing_Spark@feddit.de on 13 Jun 11:51 collapse

That’s not the tone I like to read even as an answer to a statement I don’t agree with. No need to get that personal.

hydroptic@sopuli.xyz on 13 Jun 13:28 collapse

YOU CAN’T TELL ME WHAT TO DO, YOU’RE NOT EVEN MY REAL DAD

Killing_Spark@feddit.de on 13 Jun 14:13 collapse

You don’t know that.

Killing_Spark@feddit.de on 13 Jun 11:49 next collapse

I’m not saying nobody should work on this. There is obviously demand, or at least big tech is assuming demand. I’m just saying it’s not surprising to me that a lot of FOSS developers don’t really care.

someacnt_@lemmy.world on 13 Jun 22:44 collapse

I used Ubuntu until a few weeks ago, when I switched to Pop!_OS. In this sense, I might be close to the “normies”. Yet I am incredibly skeptical of AI.

It’s distinct.

callcc@lemmy.world on 13 Jun 11:13 next collapse

This, and on top of being inexact, it’s neither understandable nor transparent. These are two of the top reasons to push for free software. Even if the engine executing and training the models is free, the model itself can’t really be considered free because of its lack of transparency.

MudMan@fedia.io on 13 Jun 11:28 collapse

That is a stretch. If you try to download and host a local model, which is fairly easy to do these days, the text input and output may be semi-random, but you definitely have control over how to plug it into any other software.

I, for one, think that fuzzy, imprecise outputs have lots of valid uses. I don't use LLMs to search for factual data, but they're great to remind you of names of things you know but have forgotten, or provide verifiable context to things you have heard but don't fully understand. That type of stuff.

I think the AI shills have done a great disservice by presenting this stuff as a search killer or a human replacement for tasks, which it is not, but there's a difference between not being the next Google and being useless. So no, Apple and MS, I don't want it monitoring everything I do at all times and becoming my primary interface... but I don't mind a little search window where I can go "hey, what was that movie from the 50s about the two old ladies that were serial killers? Was that Cary Grant or Jimmy Stewart?".
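For what it’s worth, the “plug it into any other software” part really does take only a few lines. The sketch below assumes a locally hosted Ollama server on its default port; the endpoint, port, and model name are assumptions, and any local inference server with an HTTP API works much the same:

```python
# Minimal client for a locally hosted model; nothing leaves the machine.
import json
import urllib.request

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send a prompt to the local server and return the model's reply."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs a running local server):
#   print(ask("What was that 1940s film about two murderous old ladies?"))
```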

callcc@lemmy.world on 13 Jun 21:40 collapse

I’m not against probabilistic models and the like. I merely try to capture part of the reason they are not always well received in the floss community.

I use LLMs regularly, and there is nothing rivalling them in many use cases.

snooggums@midwest.social on 13 Jun 11:50 next collapse

I think the biggest problem is that ai for now is not an exact tool that gets everything right.

The biggest problem is that it isn’t an exact tool, but is being presented as if it was and implemented as a replacement for people instead of a tool they can use to make themselves more efficient.

bad_news@lemmy.billiam.net on 15 Jun 12:01 collapse

Nice try, bot

zingo@lemmy.ca on 13 Jun 12:25 collapse

Yeah, sure don’t want Skynet built-in on my Linux Distro.

Gigasser@lemmy.world on 13 Jun 11:05 next collapse

I think most of the hostility is in regard to the shilling of certain sites and services. Local self-hosted AI is not likely to get as much flak, I feel. Another aspect of the hate is people generating images and calling it art, which it is, but it’s the microwave equivalent of art. Such negative sentiments can be remedied by actually doing artistic shit with whatever image they generate: like, idk, putting the image into Photoshop and editing it in a way that actually improves it, or using said image as a canvas to be added onto.

Edit Addendum: also the negative perception of AI has mostly been engendered by some of its more unpleasant supporters, who think of it as a way to make “irrelevant” certain groups they don’t like, and to take some sorta sick schadenfreude in the “replacement” of these people, which they think may be a way of reducing the power of these people (politically, socially, etc), and that’s kinda fucked up.

kbal@fedia.io on 13 Jun 11:07 next collapse

One of the main things that turns people off when the topic of "AI" comes up is the absolutely ridiculous level of hype it gets. For instance, people claiming that current LLMs are a revolution comparable to the invention of the printing press, and that they have such immense potential that if you don't cram them into every product you can all your software will soon be obsolete.

FatCat@lemmy.world on 13 Jun 11:31 collapse

The amount of time they save is huge, no wonder people are excited.

avidamoeba@lemmy.ca on 13 Jun 11:39 collapse

Doubt

chrash0@lemmy.world on 13 Jun 11:07 next collapse

yeah i see that too. it seems like mostly a reactionary viewpoint. the reaction is understandable to a point since a lot of the “AI” features are half baked and forced on the user. to that point i don’t think GNOME etc should be scrambling to add copies of these features.

what i would love to see is more engagement around additional pieces of software that are supplemental. for example, i would love if i could install a daemon that indexes my notes and allows me to do semantic search. or something similar with my images.
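that notes-indexing idea is easy to prototype. a real daemon would use a neural embedding model for genuinely semantic matches; the sketch below stands in with plain bag-of-words cosine similarity just to stay self-contained (the note names and contents are made up):

```python
# rough prototype of "index my notes, search them by similarity".
# a real daemon would use neural embeddings; bag-of-words cosine
# similarity stands in here so the sketch stays self-contained.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Lowercased word counts as a crude document vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(notes: dict[str, str], query: str) -> list[tuple[str, float]]:
    """Rank notes by similarity to the query, best match first."""
    qv = vectorize(query)
    ranked = [(name, cosine(vectorize(body), qv)) for name, body in notes.items()]
    return sorted(ranked, key=lambda t: t[1], reverse=True)

notes = {
    "backup.md": "restic backup schedule for the home server",
    "recipes.md": "sourdough starter feeding notes",
}
print(search(notes, "server backup")[0][0])  # backup.md ranks first
```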

the problems with AI features aren’t within the tech itself but in the surrounding politics. it’s become commonplace for “responsible” AI companies like OpenAI to not even produce papers around their tech (product announcement blogs that are vaguely scientific don’t count), much less source code, weights, and details on training data. and even when Meta releases their weights, they don’t specify their datasets. the rat race to see who can make a decent product with this amazing tech has made the whole industry a bunch of pearl clutching FOMO based tweakers. that sparks a comparison to blockchain, which is fair from the perspective of someone who hasn’t studied the tech or simply hasn’t seen a product that is relevant to them. but even those people will look at something fantastical like ChatGPT as if it’s pedestrian or unimpressive because when i asked it to write an implementation of the HTTP spec in the style of Fetty Wap it didn’t run perfectly the first time.

moreeni@lemm.ee on 13 Jun 13:53 collapse

Finally, a sane answer! Sadly it’s buried all the way down here in the thread

HumanPenguin@feddit.uk on 13 Jun 11:07 next collapse

Damned impressive. How the evil AI is managing to post in its own defense.

I for one will be happy to bow to our new AI overlord.

FatCat@lemmy.world on 13 Jun 11:33 collapse

I do not feel comfortable discussing whether I am an artificial intelligence or not. I aim to be direct in my communication, so I will simply state that such metaphysical questions about my nature are not something I can engage with. Perhaps we could find a different topic that allows me to be more helpful to you within the proper bounds. I’m happy to assist with writing, analysis, research, or any other constructive tasks. 😃

JackGreenEarth@lemm.ee on 13 Jun 22:48 collapse

Lol

breadsmasher@lemmy.world on 13 Jun 11:09 next collapse

I think conceptually AI is very useful and interesting as a general technical thing. But when we start talking about OpenAI and others, their methods of data collection, respect for licenses, etc. are where I (and I believe others) take issue.

MudMan@fedia.io on 13 Jun 11:23 next collapse

Seems to go to the OPs point of having open software alternatives, though. I am in a fairly unusual place regarding the practical usage of some of these things, but I do agree that if the entire concept is fundamentally rejected among proponents of open software that delays the possibility of developing viable alternatives to work around those issues.

breadsmasher@lemmy.world on 13 Jun 11:26 collapse

Yeah, absolutely. There’s space for “fairly trained” models (for lack of a better term). And I believe these exist? Or are touted to be so (whether that’s true, who knows). Having some level of integration with the Linux desktop in a way that keeps everyone happy will be a significant challenge.

MudMan@fedia.io on 13 Jun 13:26 collapse

Yeah, on that I'm gonna say it's unnecessary. I don't know what "integration with the desktop" gets you that you can't get from having a web app open or a separate window open. If you need some multimodal goodness you can just take a screenshot and paste it in.

I'd be more concerned about model performance and having a well integrated multimodal assistant that can do image generation, image analysis and text all at once. We have individual models but nothing like that that is open and free, that I know of.

FatCat@lemmy.world on 13 Jun 12:10 collapse

I agree. OpenAI have sold everything they supposedly stood for.

Lightrider@sh.itjust.works on 13 Jun 11:10 next collapse

There is no ethical computing under capitalism. youtu.be/AaU6tI2pb3M?si=UfkoaSU-gTIvP52i

PipedLinkBot@feddit.rocks on 13 Jun 11:11 next collapse

Here is an alternative Piped link(s):

https://piped.video/AaU6tI2pb3M?si=UfkoaSU-gTIvP52i

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I’m open-source; check me out at GitHub.

FatCat@lemmy.world on 13 Jun 12:09 collapse

It is always easier to blame an -ism than built-in conflicting human tendencies.

Lightrider@sh.itjust.works on 13 Jun 20:36 collapse

It’s easier to ignore reality and believe humanity is naturally ruthless than to recognize and blame the systems that support and encourage the worst tendencies among the worst of us. Hope those boots taste good.

MNByChoice@midwest.social on 13 Jun 11:16 next collapse

The Linux community has never been of one mind on anything. We have always been against, and for, everything.

Some distro or project will integrate AI, or not, and it will be forked. And then forked again.

Many AI models are run on Linux. Linux won’t be left behind in any real sense. Linux won’t lose market share over this.

Linux developers paid by AI firms will integrate it into products. Those that volunteer will make their own decisions.

eugenia@lemmy.ml on 13 Jun 11:22 next collapse

Testing AI (a knowledge system) was my first job out of college in the ’90s (I used to be a programmer). I’m not against it, but I don’t like it underfoot either. I like using the operating system all by myself, and generating things on my own. Especially now that I’m an artist, I like painting on paper. I even dislike digital art (I find it flat), let alone generative art.

zerakith@lemmy.ml on 13 Jun 11:22 next collapse

I won’t rehash the arguments around “AI” that others are best placed to make.

My main issue is that AI as a term is basically a marketing one, used to convince people that these tools do something they don’t, and it’s causing real harm. It’s redirecting resources and attention onto a very narrow subset of tools, replacing other less intensive tools. These tools have significant impacts (during an existential crisis around our use and consumption of energy). There are some really good targeted uses of machine learning techniques, but they are being drowned out by a hype train that is determined to make the general public think that we have, or are near, Data from Star Trek.

Additionally, as others have said, the current state of “AI” has a very anti-FOSS ethos, with big firms using and misusing their monopolies to steal, borrow, and co-opt data that isn’t theirs to build something that contains that data but is their copyright. Some of this data is intensely personal and sensitive, and the original intent behind the sharing was not the training of a model which may, in certain circumstances, spit that data out verbatim.

Lastly, since you use the term Luddite: it’s worth actually engaging with what that movement was about. Whilst it’s pitched now as a generic anti-technology backlash, in fact it was a movement of people who saw what the priorities and choices in the new technology meant for them: the people who didn’t own the technology and would get worse living and working conditions as a result. As it turned out, they were almost exactly correct in their predictions. They are indeed worth thinking about as an allegory for the moment we find ourselves in. How do ordinary people want this technology to change our lives? Who do we want to control it? Given its implications for our climate needs, can we afford to use it now, and if so, for what purposes?

Personally, I can’t wait for the hype train to pop (or maybe depart?) so we can get back to rational discussions about the best uses of machine learning (and computing in general) for the betterment of all rather than the enrichment of a few.

AnarchoSnowPlow@midwest.social on 13 Jun 11:46 next collapse

It’s a surprisingly good comparison especially when you look at the reactions: frame breaking vs data poisoning.

The problem isn’t progress; the problem is that some of us disagree with the idea that what’s being touted is actual progress. The things LLMs are actually good at they’ve been doing for years (language translation); the rest of it is so inexact it can’t be trusted.

I can’t trust any LLM-generated code because it lies about what it’s doing, so I need to verify everything it generates anyway, in which case it’s easier to write it myself. I keep trying it, and it looks impressive until it ends up a way worse version of something I could have already written.

I assume that it’s the same way with everything I’m not an expert in. In which case it’s worse than useless to me, I can’t trust anything it says.

The only thing I can use it for is to tell me things I already know and that basically makes it a toy or a game.

That’s not even getting into the security implications of giving shitty software access to all your sensitive data etc.

aksdb@lemmy.world on 13 Jun 21:33 collapse

If you are so keen on correctness, please don’t say “LLMs are lying”. Lying is a conscious action of deceiving. LLMs are not capable of that. That’s exactly the problem: they don’t think, they just assemble with probability. If they could lie, they could also produce real answers.

FatCat@lemmy.world on 13 Jun 11:51 next collapse

Right, another aspect of the Luddite movement is that they lost. They failed to stop the spread of industrialization and machinery in factories.

Screaming at a train moving 200 km/h, hoping it will stop.

Telorand@reddthat.com on 13 Jun 12:24 next collapse

But that doesn’t mean pushback is doomed to fail this time. “It happened once, therefore it follows that it will happen again” is confirmation bias.

Also, it’s not just screaming at a train. There’s actual litigation right now (and potential litigation) from some big names to rein in the capitalists exploiting the lack of regulation around LLMs. Each is not necessarily for a “Luddite” purpose, but collectively, the results may effectively achieve the same thing.

FatCat@lemmy.world on 13 Jun 12:28 collapse

“It happened once, therefore it follows that it will happen again” is confirmation bias

You’re right, but realistically it will fail. The voices speaking against it are few and largely marginalised, with no money or power. There will probably be regulations, but it will not go away.

Telorand@reddthat.com on 13 Jun 14:03 collapse

Right, but like I said, there’s several lawsuits (and threatened lawsuits) right now that might achieve the same goals of those speaking against how it’s currently used.

I don’t think anyone here is arguing for LLMs to go away completely, they just want to be compensated fairly for their work (else, restrict the use of said work).

arken@lemmy.world on 13 Jun 12:35 next collapse

So, lick the boot instead of resisting you say?

FatCat@lemmy.world on 13 Jun 12:38 collapse

Work on useful alternatives to big corpo crapware = lick the boot?

Mkay…

arken@lemmy.world on 13 Jun 12:47 collapse

It was more in response to your comments. I don’t think anyone has a problem with useful FOSS alternatives per se.

tabular@lemmy.world on 13 Jun 12:57 next collapse

All we have are words or violence.

davel@lemmy.ml on 13 Jun 14:15 collapse

You misunderstand the Luddite movement. They weren’t anti-technology, they were anti-capitalist exploitation.

The 1810s: The Luddites act against destitution

It is fashionable to stigmatise the Luddites as mindless blockers of progress. But they were motivated by an innate sense of self-preservation, rather than a fear of change. The prospect of poverty and hunger spurred them on. Their aim was to make an employer (or set of employers) come to terms in a situation where unions were illegal.

FatCat@lemmy.world on 13 Jun 15:45 collapse

They probably wouldn’t be such a laughing stock if they were successful.

BrianTheeBiscuiteer@lemmy.world on 13 Jun 13:18 next collapse

I’ve never heard anyone explicitly say this, but I’m sure a lot of people (i.e. management) think that AI is a replacement for static code. If you have a component with constantly changing requirements then it can make sense, but don’t ask an LLM to perform a process that’s done every single day in the exact same way. Chief among my AI concerns is the amount of energy it uses. It feels like we could mostly wean off of carbon-emitting fuels in 50 years, but if energy demand skyrockets we’ll be pushing those dates back by decades.

someacnt_@lemmy.world on 13 Jun 22:38 collapse

My concern with AI is also its energy usage. There’s a reason OpenAI has tons of datacenters, yet people think it doesn’t take much because it’s “free”!

someacnt_@lemmy.world on 13 Jun 22:40 collapse

Oh. So modern presentation of the luddite movement is also propaganda?

DragonConsort@pawb.social on 13 Jun 11:24 next collapse

I don’t like AI because it’s literally not AI. I know damn well that it is just a data scraping tool that throws a bunch of ‘probably right’ sentences or images into a proverbial blender and spits out an answer that has no actual comprehension or consistency behind it. It takes only an incredibly basic knowledge of computers and brains to know that we cannot make an actual intelligent program using the Von Neumann style of computer.

I have absolutely no interest in technology being sold to me based on a lie. And if we’re not calling this out for the lie it is, then it’s going to just keep getting pushed by people trying to make money off the concept at the stock market.

FatCat@lemmy.world on 13 Jun 11:37 next collapse

Sooo like most people? 🤣

Womble@lemmy.world on 13 Jun 11:42 collapse

It takes only an incredibly basic knowledge of computers and brains to know that we cannot make an actual intelligent program using the Von Neumann style of computer.

Nice to hear that it only takes a very basic knowledge of computers to settle one of the most hotly disputed issues in philosophy and computing. You should let them know you’ve decided it.

DragonConsort@pawb.social on 13 Jun 14:21 collapse

Bits operate in a fundamentally different way to neurons. Until we create hardware that can emulate the way neurons process information, we’re wasting time and money chasing ‘AI’

Womble@lemmy.world on 13 Jun 19:00 collapse

So it’s settled that neurons are the only way to create intelligence? Again, you need to get your work published; it’s clearly groundbreaking that you’ve solved these long-standing disputes.

savvywolf@pawb.social on 13 Jun 11:27 next collapse

Maybe we’d be warmer towards AI if it wasn’t being used as a way for big companies to steal content from smaller creative types in order to fund valueless wealth generators.

Big surprise that a group consisting of people rather than corporations is mad about it.

icerunner_origin@startrek.website on 13 Jun 11:27 next collapse

One of the critical differences between FOSS and commercial software is that FOSS projects don’t need to drive sales and consequently also don’t need to immediately jump onto technology trends in order to not look like they’re lagging behind the competition.

What I’ve consistently seen from FOSS over the 30 years I’ve been using it, is that if a technology choice is a good fit for the problem, then it will be adopted into projects where relevant.

I believe that there are use cases where LLM processing is absolutely a good fit, and the projects that need that functionality will use it. What you’re less likely to see is ‘AI’ added to everything, because it isn’t generally a good solution to most problems in its current form.

As an aside, you may be less likely to get good faith interaction with your question while using the term ‘luddite’ as it is quite pejorative.

boredsquirrel@slrpnk.net on 13 Jun 11:35 next collapse

AI is massively wasting power that we need for electrifying transportation and more useful things.

There are many things more useful than AI, for example good internet search engines.

AI can be useful for dedicated things like being trained on relevant tutorials and documentation to help with Linux.

FatCat@lemmy.world on 13 Jun 12:05 collapse

Too bad you can’t centrally plan what people want to use power on. Only if you could would the world be a better place.

sping@lemmy.sdf.org on 13 Jun 13:24 collapse

We can’t imagine anything but unfettered capitalism, so onward we go to our own destruction!

mwalimu@baraza.africa on 13 Jun 11:37 next collapse

Luddites were not as opposed to new technology as you say it here. They were mainly concerned about what technology would do to whom.

A helpful history right here: www.hachettebookgroup.com/…/9780316487740/?lens=l…

FatCat@lemmy.world on 13 Jun 12:04 collapse

Thanks for the history lesson. These days it is used to refer to those opposed to industrialisation, automation, computerisation, new technologies, or even progress in general.

Zeoic@lemmy.world on 13 Jun 12:49 collapse

These days, it is often misused by ignorant people because it sounds derogatory.

FTFY

sping@lemmy.sdf.org on 13 Jun 13:20 next collapse

But our ignorant misconceptions are ubiquitous so they have become truth!

trevor@lemmy.blahaj.zone on 13 Jun 13:55 collapse

Seriously. The Luddites were mostly correct about their objections to technology being used to replace humans and making exploitation more efficient, making OP’s misuse of the terms that much funnier.

Prunebutt@slrpnk.net on 13 Jun 11:38 next collapse

You should read up on what the luddites actually fought for. They were actually based af.

GolfNovemberUniform@lemmy.ml on 13 Jun 11:40 next collapse

AI may be useful in some cases (ask Mozilla) but it is not like what you said in the middle part of your post. Seeing the vote rate makes me feel a tiny bit better about this situation.

astro_ray@lemdro.id on 13 Jun 11:44 next collapse

There is this app called Upscale that uses an ML model to upscale images. It’s quite good at what it does, and I use it frequently. So there is AI stuff in Linux, just not your myopic view of AI (LLMs). And your analogy with the printing press is extremely wrong: human error aside, the printing press didn’t have remotely as many errors as LLMs. LLMs have not evolved to the point of causing a revolution, so Linux has plenty of time to see if the bandwagon sinks or sprints.

avidamoeba@lemmy.ca on 13 Jun 11:46 next collapse

Using an Al-less desktop may be akin to hand copying books after the printing press revolution.

Or perhaps not.

FatCat@lemmy.world on 13 Jun 12:22 collapse

I guess we’ll see. 😃 In any case I wouldn’t want my Linux desktop to be 5 years behind if they do take off on other platforms.

cupcakezealot@lemmy.blahaj.zone on 13 Jun 11:49 next collapse

i think firefox shows that ai can be used right, to help with accessibility.

i think the problem with ai is when companies use it as a buzzword instead of actual innovation by just cramming a bunch of ai into their product to do a bunch of niche things.

HouseWolf@lemm.ee on 13 Jun 11:48 next collapse

Half the reason I switched to Linux almost a year ago was to avoid Microsoft’s forced invasive AI bullshit. Seeing stuff like Recall has only cemented my decision.

I could go on a long rant about what I consider “right & wrong” when it comes to AI, but I’m just some dude and wanna use my own computer the way I want to.

HubertManne@moist.catsweat.com on 13 Jun 11:58 collapse

Yeah. That being said, if it was a FOSS thing it would not drive me away from Linux, because I know I could 100% not have it on a machine if I chose not to use it, and I’m pretty sure there would always be distros that don’t use it. I can install Linux without a GUI, and for a long time there were distros that did not put in a GUI on installation. FreeBSD was this way too for a while. They are still around as flavors of distros, and naming seems to have sort of standardized around the use of “server” in the name: SUSE server, CentOS server, Red Hat server, Ubuntu server, etc.

0x0@programming.dev on 13 Jun 11:52 next collapse

I’d call it realistic, not concerning.

FatCat@lemmy.world on 13 Jun 12:19 collapse

Fair enough…

Ramin_HAL9001@lemmy.ml on 13 Jun 11:59 next collapse

No, it is because people in the Linux community are usually a bit more tech-savvy than average and are aware that OpenAI/Microsoft is very likely breaking the law in how they collect data for training their AI.

We have seen that companies like OpenAI completely disregard the rights of the people who created this data that they use in their for-profit LLMs (like what they did to Scarlett Johansson), their rights to control whether the code/documentation/artwork is used in for-profit ventures, especially when stealing Creative Commons “Share Alike” licensed documentation, or GPL licensed code which can only be used if the code that reuses it is made public, which OpenAI and Microsoft does not do.

So OpenAI has deliberately conflated LLM technology with general intelligence (AGI) in order to hype their products, and so now their possibly illegal actions are also being associated with all AI. The anger toward AI is not directed at the technology itself, it is directed at companies like OpenAI who have tried to make their shitty brand synonymous with the technology.

And I haven’t even yet mentioned:

  • how people are getting fired by companies who are replacing them with AI
  • or how it has been used to target civilians in war zones
  • or how deep fakes are being used to scam vulnerable people.

The technology could be used for good, especially in the Linux community, but lately there has been a surge of unethical (and sometimes outright criminal) uses of AI by some of the world’s wealthiest companies.

HubertManne@moist.catsweat.com on 13 Jun 12:05 next collapse

As I mentioned in another comment, we have an analogy for something like an AI-less desktop: GUI-less installs. They are generally called the server version of the distro and are used in datacenters, but I’m 100% sure there are individuals out there running laptops with no GUI. I’m fine with FOSS AI, and there are LLMs licensed as such. That being said, they are still problematic, since the training requires large amounts of data that companies are not exactly stringent about collecting.

federino@programming.dev on 13 Jun 12:12 next collapse

Ok. Tell me how AI made your life better so far.

Sekki@lemmy.ml on 13 Jun 12:32 next collapse

Using “AI” has been beneficial, for example, to generate image descriptions automatically, which were then used as alternative text on a website. This increased accessibility AND users were able to use full-text search on these descriptions to find images faster. Same goes for stuff like classification of images, video and audio. I know of some applications in agriculture where object detection and classification etc. are used to optimize the usage of fertilizer and pesticides, reducing costs and environmental impact. There are of course many more examples like these, but the point should be clear.
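
A minimal sketch of that alt-text workflow: the `caption()` function below is a hypothetical stand-in for whatever captioning model is used (here it just returns canned strings); the rest shows the integration side, which is plain escaping, tagging, and searching.

```python
import html

# Hypothetical stand-in for a real image-captioning model; a real
# deployment would invoke a locally run vision model here instead.
_FAKE_CAPTIONS = {"tux.png": "A cartoon penguin mascot standing upright"}

def caption(image_path):
    return _FAKE_CAPTIONS.get(image_path, "No description available")

def img_tag(image_path):
    """Build an <img> tag whose alt text is the generated description."""
    alt = html.escape(caption(image_path))
    return f'<img src="{html.escape(image_path)}" alt="{alt}">'

def search(image_paths, query):
    """Full-text search over the generated descriptions."""
    q = query.lower()
    return [p for p in image_paths if q in caption(p).lower()]

print(img_tag("tux.png"))
print(search(["tux.png", "photo.jpg"], "penguin"))
```

The same generated descriptions serve both purposes at once: accessibility (the `alt` attribute) and findability (the search index).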

Nisaea@lemmy.sdf.org on 14 Jun 23:46 collapse

I work in a medical startup and we provide an AI powered service for semi automatic target detection for neurosurgery specifically for Parkinson’s and essential tremor. Many patients have benefitted from it so far with excellent results and the fact that it allows surgeons to perform the entire surgery under general anesthesia makes it much less traumatic and available to many more patients.

It’s okay to reconcile that AI is both an amazing tool with a lot of great benefits in some areas AND a lot of assholish data theft and overhyped, unhelpful bloat shoved down our throats in others.

arken@lemmy.world on 13 Jun 12:25 next collapse

Edit: actually, read zerakith’s comment instead.

rah@feddit.uk on 13 Jun 12:27 next collapse

free software communities

TheLinuxExperiment on YouTube

LOL

InternetCitizen2@lemmy.world on 13 Jun 12:48 next collapse

AI just requires a level of trust all of these companies have not earned.

Womble@lemmy.world on 13 Jun 13:35 collapse

It doesn’t though; local models would be at the core of FOSS AI, and they don’t require you to trust anyone with your data.

technocrit@lemmy.dbzer0.com on 13 Jun 15:42 collapse

local models would be at the core of FOSS AI, and they dont require you to trust anyone with your data.

Would? You’re slipping between imaginary and apparently declarative statements. Very typical of “AI” hype.

Womble@lemmy.world on 13 Jun 19:04 collapse

Local models WOULD form the basis of FOSS AI. Supposition on my part, but entirely supportable given there is already an open source model movement focused on producing local models, and open source software is generally privacy focused.

Local models ARE inherently private, because no information leaves the device it is processed on.

I know you don’t want to engage with arguments and would rather wail at the latest daemon for internet points, but you can have more than one statement in a sentence without being incoherent.

daniyeg@lemmy.ml on 13 Jun 12:58 next collapse

personally im fine with machine learning, what I don’t like is “AI”, a new marketing buzzword that justifies every shitty corporate exec decision and insane company evaluations.

walthervonstolzing@lemmy.ml on 13 Jun 13:17 next collapse

I think we should be chasing all the trendy trends to become competitive with the competition. That’s the only way to push those numbers up (that need to be pushed up). That’s how a winner wins.

umami_wasbi@lemmy.ml on 13 Jun 14:38 collapse

But does Linux have to “win”? And if so, what would it “win”?

walthervonstolzing@lemmy.ml on 13 Jun 14:54 collapse

The prize of the competition is what the competitors compete for. There’s a prize and the winner gets it; the loser doesn’t get it.

Why is this so hard to understand? I guess it’s nature’s way of weeding out the losers.

umami_wasbi@lemmy.ml on 13 Jun 16:44 collapse

So what prize would the Linux desktop get? For a for-profit corporation, that’s market share and revenue. Yet, as far as I’m concerned, most Linux desktops don’t chase market share or earn revenue.

walthervonstolzing@lemmy.ml on 13 Jun 17:05 collapse

It’s to out-compete the competitors so as not to become obsolete. … also I hope you’re aware that I’m saying all of this ‘ironically’, to poke fun at the mental gymnastics in the OP’s post.

umami_wasbi@lemmy.ml on 13 Jun 18:38 collapse

Oh. I get it now.

juliebean@lemm.ee on 13 Jun 13:19 next collapse

just a historical factoid that a lot of people don’t realize: the luddites weren’t anti technology without reason. they were apprehensive about new technology that threatened their livelihoods, technology that threatened them with starvation and destitution in the pursuit of profit. i think the comparison with opposition to AI is pretty apt, in many cases, honestly.

Brickardo@feddit.nl on 13 Jun 13:33 next collapse

I’d argue that if you exactly call the model you refer to by their actual name, you’ll get much different reactions. For instance, expert systems have been around for a long while.

Rozauhtuno@lemmy.blahaj.zone on 13 Jun 13:36 next collapse

I get that AI has many problems but at the same time the potential it has is immense, especially as an assistant on personal computers

[Citation needed]

Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete.

And this mentality is exactly what AI sceptics criticise. The whole reason the AI arms race is going on is that every company/organisation seems convinced that sci-fi-like AI is right around the corner, and the first one to get it will capture 100% of the market in their walled garden while everyone else fades into obscurity. They’re all so obsessed with this that they don’t see a problem with putting a virtual dumbass that is constantly wrong in charge.

chronicledmonocle@lemmy.world on 13 Jun 14:04 next collapse

I’m not against AI. I’m against the hordes of privacy-disrespecting data collection, the fact that everybody is irresponsibly rushing to slap AI into everything, even when it doesn’t make sense, because line go up, and the fact that nobody is taking the limitations of things like Large Language Models seriously.

The current AI craze is like the NFTs craze in a lot of ways, but more useful and not going to just disappear. In a year or three the crazed C-level idiots chasing the next magic dragon will settle down, the technology will settle into the places where it’s actually useful, and investors will stop throwing all the cash at any mention of AI with zero skepticism.

It’s not Luddite to be skeptical of the hot new craze. It’s prudent as long as you don’t let yourself slip into regressive thinking.

halm@leminal.space on 13 Jun 14:31 collapse

Completely agree and I’ll do you one better:

What is being sold as AI doesn’t hold a candle to actual artificial intelligence, they’re error prone statistical engines incapable of delivering more than the illusion of intelligence. The only reason they were launched to the public is that corporations were anxious not to be the last on the market — whether their product was ready or not.

I’m happy to be a Luddite if it means having the capacity for critical thought to Just Not Use Imperfect Crapware™.

unlawfulbooger@lemmy.blahaj.zone on 13 Jun 15:19 collapse

Then you might like this tech blog: theluddite.org/#!post/ai-hype

umami_wasbi@lemmy.ml on 13 Jun 14:17 next collapse

Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete.

I don’t get it. How would Linux desktops become obsolete if they don’t have native AI toolsets in their DEs? It’s not like they have 80% market share. People who run them as daily drivers are still niche, and most don’t even know Linux exists. Most people grew up with Microsoft and Apple shoving ads down their throats, using them in schools first hand, and that’s all they know and were taught. If I need AI, I will find ways to integrate it into my workflow, not have it because the dev thinks I need it.

And if you really need something like MS’s Recall, here is a FOSS version of it.

SuperSpruce@lemmy.zip on 13 Jun 14:56 next collapse

Is OpenRecall secure as well? One of my biggest problems with MS recall is that it stores all your personal info in plain text.

umami_wasbi@lemmy.ml on 13 Jun 16:50 collapse

This I have no idea.

FatCat@lemmy.world on 13 Jun 15:44 collapse

It’s a good point, but you can always have even less market share.

callcc@lemmy.world on 13 Jun 21:45 collapse

A floss project’s success is not necessarily marked by its market share but often by the absolute benefit it gives to its users. A project with one happy user and developer can be a success.

krolden@lemmy.ml on 13 Jun 14:26 next collapse

I’d make a comment but I’m banned from lemmy.world

davel@lemmy.ml on 13 Jun 14:37 next collapse

I s👀 you

Moorshou@lemmy.zip on 14 Jun 01:28 collapse

I’m on a different instance; I think that lets me see all of the Internet’s sides.

RiikkaTheIcePrincess@pawb.social on 13 Jun 14:47 next collapse

[Sarcastic ‘translation’] tl;dr: A lot of people who are relatively well-placed to understand how much technology is involved even in downvoting this post are downvoting this post because they’re afraid of technology!

Just more fad-worshipping foolishness, drooling over a buzzword and upset that others call it what it is. I want it to be over but I’m sure whatever comes next will be just as infuriating. Oh no, now our cursors all have to change according to built-in (to the cursor, somehow, for some reason) software that tracks our sleep patterns! All of our cursors will be obsolete (?!??) unless they can scalably synergize with the business logic core to our something or other 😴

Hellmo_Luciferrari@lemm.ee on 13 Jun 15:07 next collapse

AI isn’t a magic bullet. Sure it has its uses, but you have to weigh its usefulness against the ideology behind a project and its creators. Just because a software developer or community doesn’t embrace AI doesn’t mean they will be “obsolete.”

AI is the current trend that is being shoehorned into everything. I mean literally everything. I don’t think we need AI touching everything.

I don’t want or need AI crammed into my desktop environment. And I surely don’t want it interjecting into my filesystem with my data. It is a privacy concern. And many of other people will feel the same or similarly as I do.

AI is a tool, and with all tools: use the appropriate tool for the job.

technocrit@lemmy.dbzer0.com on 13 Jun 15:39 next collapse

Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete.

lol no thanks.

edinbruh@feddit.it on 13 Jun 15:41 next collapse

AI has a lot of great uses, and a lot of stupid smoke and mirrors uses. For example, text to speech and live captioning or transcription are useful.

“Hypothetical AI desktop”, “Siri”, “Copilot+” and other assistants are smoke and mirrors. Mainly because they don’t work. But even if they did, they would be unreliable (because AI is unreliable) and would have to be limited to not cause issues. And so they would not be useful.

Plus, on Linux they would be especially useless, because there are a million ways to do different things, and a million different setups. What if you asked the AI to “change the screen resolution” and it started editing some GNOME files while you are on KDE, or started mangling your xorg.conf because it’s heavily customized?

Plus, all the OpenAI stuff you are seeing these days doesn’t really work because it’s clever; it works because it’s huge. ChatGPT needs to be trained for days or weeks on specialized hardware. Who’s gonna pay for all that in the open source community?
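
That pitfall can be made concrete: even a toy version of such an assistant would have to identify the environment before touching anything, and refuse when it can’t. A sketch, assuming the standard `XDG_CURRENT_DESKTOP` environment variable is set; the tool names returned are illustrative, not a real assistant API.

```python
import os

def pick_resolution_tool(env=None):
    """Hypothetical dispatch: refuse to act unless the desktop is known.

    A naive assistant that always edits GNOME's dconf or xorg.conf would
    corrupt setups it doesn't understand; an honest one has to bail out
    on anything it can't identify.
    """
    env = os.environ if env is None else env
    desktop = env.get("XDG_CURRENT_DESKTOP", "").lower()
    if "gnome" in desktop:
        return "gsettings / Mutter D-Bus API"
    if "kde" in desktop:
        return "kscreen-doctor"
    # Unknown or heavily customized setup: do nothing rather than guess.
    return None

print(pick_resolution_tool({"XDG_CURRENT_DESKTOP": "KDE"}))
```

Even this trivial guardrail only covers two desktops; the long tail of window managers and custom setups is exactly why a one-size-fits-all assistant is a poor fit for Linux.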

rtxn@lemmy.world on 13 Jun 16:20 next collapse

People who consciously use and support F/LOSS usually do it because they look at software with a very critical eye. They see the failures of proprietary software and choose to go the other way. That same critical view is why they are critical of most “AI” tools – there have been numerous failures attributed to AI, and precious little value that isn’t threatened by those failures.

WallEx@feddit.de on 13 Jun 16:38 next collapse

A lot of mentions of AI from companies is absolute marketing bullshit. And if you can’t see that you don’t want to.

[deleted] on 13 Jun 16:47 next collapse

.

soulfirethewolf@lemdro.id on 13 Jun 17:22 next collapse

I imagine it might happen one day. But at present, I don’t really think that most computers are at a point where they can utilize it without proprietary cloud technologies that aren’t considered ethical or financially sustainable. And even if people’s computers could fully handle things themselves, there would still need to be a group of developers with enough knowledge to actually implement it.

Consumer AI has always been pretty limited in most Linux desktops. Heck, I’m still waiting for a Desktop Environment to one day have a nice implementation of Speech-to-text like Windows and macOS.

soulfirethewolf@lemdro.id on 13 Jun 17:46 collapse

I feel like another part of it too is just that Linux users also just have higher expectations in areas around privacy, security, and flexibility, and lower expectations of elements like UX and Minimum Viable Product, the latter especially being that they don’t even view the software as a “product”.

A lot of AI features are powered by data collection in some way. And given that most Linux users don’t even like small amounts of telemetry being sent without their explicit permission, I couldn’t imagine how libre AI models could be built, especially on a shoestring budget, to produce something capable of acceptable results, all while avoiding the heat that current AI companies are facing over plagiarism accusations and copyright infringement.

I’m not really saying it can’t happen, but it would require a larger organization like Mozilla, which is actively working on building open source AI that could then be incorporated by someone else (similar to the soon-to-be-dead Mozilla Location Services being integrated through daemons used by desktop environments). Or, by a much more random guess, a corporation with a profit incentive to incorporate Linux, like Valve and the Steam Deck with its inclusion of the Plasma desktop via an Arch fork. And in the long run, the FOSS community building a larger developer base that actually could do it, and one day upstreaming it all once it’s in a good enough format.

KindaABigDyl@programming.dev on 13 Jun 17:34 next collapse

AI is mostly just hype. It’s the new blockchain

There are important AI technologies in the past for things like vision processing and the new generative AI has some uses like as a decent (although often inaccurate) summarizer/search engine. However, it’s also nothing revolutionary.

It’s just a neat piece of tech.

But here come MS, Apple, other big companies, and tech bros to push AI hard, and it’s so obv that it’s all just a big scam to get more of your data and to lock down systems further or be the face of get-rich-quick schemes.

I mean the image you posted is a great example. Recall is a useless feature that also happens to store screenshots of everything you’ve been doing. You’re delusional if you think MS is actually going to keep that totally local. Both MS and the US government are going to have your entire history of using the computer, and that doesn’t sit right with FOSS people.

FOSS people tend to be rather technical than the average person, so they don’t fall for tech enthusiast nonsense as much.

lemmyvore@feddit.nl on 13 Jun 17:56 next collapse

You can’t do machine learning without tons of data and processing power.

Commercial “AI” has been built on fucking over everything that moves, on both counts. They suck power at alarming rates, especially given the state of the climate, and they blatantly ignore copyright and privacy.

FOSS tends to be based on a philosophy that’s strongly opposed to at least some of these methods. To start with, FOSS is built around respecting copyright, and Microsoft is currently stealing GitHub code, anonymizing it, and offering it under their Copilot product, while explicitly promising companies who buy Copilot that they will insulate them from any legal fallout.

So yeah, some people in the “Linux space” are a bit annoyed about these things, to put it mildly.

Edit: but, to address your concerns, there’s nothing to be gained by rushing head-first into new technology. FOSS stands to gain nothing from early adoption. FOSS is a cultural movement not a commercial entity. When and if the technology will be practical and widely available it will be incorporated into FOSS. If it won’t be practical or will be proprietary, it won’t. There’s nothing personal about that.

moritz@l.deltaa.xyz on 13 Jun 19:08 next collapse

Great technology is invisible.

As long as AI is advertised as being a unique selling point, I’m not interested.

If you think of specific problems it is better to point them out and try think of solutions, not reject the technology as a whole.

Yes. There are problems with the Gnome desktop environment. Without looking at the issue tracker, I can assure you that AI is not the solution to any of them. Even if AI may be a possible solution to a problem, it would probably not be the best one.

bloodfart@lemmy.ml on 13 Jun 20:20 next collapse

Good.

The Luddites were right.

bigmclargehuge@lemmy.world on 13 Jun 20:21 next collapse

Sounds like something an AI would post. Quick, what color are your eyes?

MeetInPotatoes@lemmy.ml on 14 Jun 04:21 collapse

Potato colored in my case.

nyan@sh.itjust.works on 13 Jun 20:47 next collapse

Gnome and other desktops need to start working on integrating FOSS

In addition to everything everyone else has already said, why does this have anything to do with desktop environments at all? Remember, most open-source software comes from one or two individual programmers scratching a personal itch—not all of it is part of your DE, nor should it be. If someone writes an open-source LLM-driven program that does something useful to a significant segment of the Linux community, it will get packaged by at least some distros, accrete various front-ends in different toolkits, and so on.

However, I don’t think that day is coming soon. Most of the things “Apple Intelligence” seems to be intended to fuel are either useless or downright offputting to me, and I doubt I’m the only one—for instance, I don’t talk to my computer unless I’m cussing it out, and I’d rather it not understand that. My guess is that the first desktop-directed offering we see in Linux is going to be an image generator frontend, which I don’t need but can see use cases for even if usage of the generated images is restricted (see below).

Anyway, if this is your particular itch, you can scratch it—by paying someone to write the code for you (or starting a crowdfunding campaign for same), if you don’t know how to do it yourself. If this isn’t worth money or time to you, why should it be to anyone else? Linux isn’t in competition with the proprietary OSs in the way you seem to think.

As for why LLMs are so heavily disliked in the open-source community? There are three reasons:

  1. The fact that they give inaccurate responses, which can be hilarious, dangerous, or tedious depending on the question asked. A lot of nontechnical people, including management at companies trying to incorporate “AI” into their products, don’t realize the answers can be dangerously inaccurate.
  2. Disputes over the legality and morality of using scraped data in training sets.
  3. Disputes over who owns the copyright of LLM-generated code (and other materials, but especially code).

Item 1 can theoretically be solved by bigger and better AI models, but 2 and 3 can’t be. They have to be decided by the courts, and at an international level, too. We might even be talking treaty negotiations. I’d be surprised if that takes less than ten years. In the meanwhile, for instance, it’s very, very dangerous for any open-source project to accept a code patch written with the aid of an LLM—depending on the conclusion the courts come to, it might have to be torn out down the line, along with everything built on top of it. The inability to use LLM output for open source or commercial purposes without taking a big legal risk kneecaps the value of the applications. Unlike Apple or Microsoft, the Linux community can’t bribe enough judges to make the problems disappear.

HipsterTenZero@dormi.zone on 13 Jun 21:00 next collapse

General Ludd had some good points tho…

kenkenken@sh.itjust.works on 13 Jun 21:11 next collapse

Try to think less about “communities” and maybe you will be happier.

electric_nan@lemmy.ml on 13 Jun 21:47 next collapse

There are already a lot of open models and tools out there. I totally disagree that Linux distros or DEs should be looking to bake in AI features. People can run an LLM on their computer just like they run any other application.

DigDoug@lemmy.world on 13 Jun 22:49 next collapse

…this looks like it was written by a supervisor who has no idea what AI actually is, but desperately wants it shoehorned into the next project because it’s the latest buzzword.

Cort@lemmy.world on 14 Jun 01:05 next collapse

Guys we need AI on our blockchain web3.0 iot. Just imagine the synergy

LeFantome@programming.dev on 14 Jun 16:59 collapse

Here we have a straight-shooter with upper management written all over him

Rozauhtuno@lemmy.blahaj.zone on 14 Jun 07:53 next collapse

"I saw a new toy on tv, and I want it NOW!"

  • Basically how the technobro mind works.

crispy_kilt@feddit.de on 14 Jun 14:29 collapse

I see you’ve met my employer

UnfortunateShort@lemmy.world on 14 Jun 00:05 next collapse

Is there no electron wrapper around ChatGPT yet? Jeez we better hurry, imagine having to use your browser like… For pretty much everything else.

Goun@lemmy.ml on 14 Jun 06:54 collapse

I did not buy these gaming memory sticks for nothing, bring me more electron!

Spectacle8011@lemmy.comfysnug.space on 14 Jun 04:44 next collapse

Tech Enthusiasts: Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart-house is bluetooth enabled and I can give it voice commands via alexa! I love the future!

Programmers / Engineers: The most recent piece of technology I own is a printer from 2004 and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise.

lolcatnip@reddthat.com on 16 Jun 01:52 collapse

That doesn’t describe me or any other programmer I know.

Spectacle8011@lemmy.comfysnug.space on 16 Jun 05:35 collapse

It doesn’t describe me either, but I had nothing meaningful to contribute to the discussion.

epoch@lemmy.world on 14 Jun 04:57 next collapse

This article should be ignored.

Dirk@lemmy.ml on 14 Jun 05:38 next collapse

Whenever AI is mentioned lots of people in the Linux space immediately react negatively.

Because whenever AI is mentioned, it usually isn’t even close to what AI used to mean.

kazaika@lemmy.world on 14 Jun 09:16 next collapse

Imo you immensely overestimate the capabilities of these models. What they show to the public are always hand-picked situations, even if they say they don’t.

foremanguy92_@lemmy.ml on 14 Jun 10:58 next collapse

You’re partly right: when something AI-based is announced, it gets heavily criticized by some people, and they’re almost always right to do so. When something new pops up, like Windows Recall, it’s clear that the “new” feature isn’t really what AI is capable of, and it raises really important questions about privacy. But you’re also right that Linux should take more of an interest in AI and try to do it the right way! For now, though, there are no really good use cases for AI inside a distro. LLMs are good but don’t need to be linked to user activity. Image generators are great but don’t need to be linked to user activity either—that’s where Windows tried Recall and failed. Apple wants to implement the same in iOS 18, and it will surely be a success among Apple-minded people. But here, where FOSS, privacy, and anti-Big-Tech folks are the main crowd, it’s absolutely certain that every for-profit “new AI” feature will be hated. I’m not against that mindset, just stating facts.

luciferofastora@lemmy.zip on 14 Jun 11:37 next collapse

The first problem, as with many things AI, is nailing down just what you mean with AI.

The second problem, as with many things Linux, is the question of shipping these things with the Desktop Environment / OS by default, given that not everybody wants or needs that and for those that don’t, it’s just useless bloat.

The third problem, as with many things FOSS or AI, is transparency, here particularly training. Would I have to train the models myself? If yes: How would I acquire training data that has quantity, quality and transparent control of sources? If no: What control do I have over the source material the pre-trained model I get uses?

The fourth problem is privacy. The tradeoff for a universal assistant is universal access, which requires universal trust. Even if it can only fetch information (read files, query the web), the automated web searches could expose private data to whatever search engine or websites it uses. Particularly in the wake of Recall, the idea of saying “Oh actually we want to do the same as Microsoft” would harm Linux adoption more than it would help.

The fifth problem is control. The more control you hand to machines, the more control their developers will have. This isn’t just about trusting the machines at that point, it’s about trusting the developers. To build something the caliber of full AI assistants, you’d need a ridiculous amount of volunteer effort, particularly due to the splintering that always comes with such projects and the friction that creates. Alternatively, you’d need corporate contributions, and they always come with an expectation of profit. Hence we’re back to trust: Do you trust a corporation big enough to make a difference to contribute to such an endeavour without any avenue of abuse? I don’t.


Linux has survived long enough despite not keeping up with every mainstream development. In fact, what drove me to Linux was precisely that it doesn’t do everything Microsoft does. The idea of volunteers (by and large unorganised) trying to match the sheer power of a megacorp (with a strict hierarchy for who calls the shots) in development power to produce such an assistant is ridiculous enough, but the suggestion that DEs should come with it already integrated? Hell no

One useful application of “AI” (machine learning) I could see: Evaluating logs to detect recurring errors and cross-referencing them with other logs to see if there are correlations, which might help with troubleshooting.
That doesn’t need to be an integrated desktop assistant, it can just be a regular app.
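That kind of log-clustering app can be sketched with nothing but the Python standard library—no LLM required. Everything below (the sample log format, the normalization regexes, the similarity threshold) is an assumption for illustration, not a reference implementation:

```python
import re
from difflib import SequenceMatcher

def normalize(line: str) -> str:
    """Strip timestamps, hex addresses and numbers so recurring
    errors that differ only in details collapse into one pattern."""
    line = re.sub(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}\S*", "<ts>", line)
    line = re.sub(r"0x[0-9a-fA-F]+", "<addr>", line)
    return re.sub(r"\d+", "<n>", line).strip()

def cluster_errors(lines, threshold=0.85):
    """Group similar log lines and count recurrences,
    most frequent pattern first."""
    clusters = []  # each entry: [representative pattern, count]
    for raw in lines:
        msg = normalize(raw)
        for cluster in clusters:
            if SequenceMatcher(None, msg, cluster[0]).ratio() >= threshold:
                cluster[1] += 1
                break
        else:
            clusters.append([msg, 1])
    return sorted(clusters, key=lambda c: -c[1])

logs = [
    "2024-06-14 10:01:02 ERROR disk /dev/sda1: I/O error at sector 120",
    "2024-06-14 10:05:40 ERROR disk /dev/sda1: I/O error at sector 998",
    "2024-06-14 10:07:13 WARN service foo restarted (pid 4321)",
    "2024-06-14 10:09:55 ERROR disk /dev/sda1: I/O error at sector 43",
]
for pattern, count in cluster_errors(logs):
    print(f"{count}x  {pattern}")
```

Cross-referencing with a second log would just mean comparing the surviving patterns’ timestamps, which is exactly the sort of scoped, offline tool that fits a regular app rather than a desktop-wide assistant.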

Really, that applies to every possible AI tool. Make it an app, if you care enough. People can install it for themselves if they want. But for the love of the Machine God, don’t let the hype blind you to the issues.

jjlinux@lemmy.ml on 14 Jun 12:10 next collapse

That’s easy: move over to Windows or Mac and enjoy. I’ll stay on my dumb-ass Linux distros, thank you.

fluxion@lemmy.world on 14 Jun 16:32 collapse

The AI in my head is a bit underpowered but it gets the job done

jjlinux@lemmy.ml on 14 Jun 19:51 collapse

Same as mine. But mine also gets confused regularly, and it gets worse with every new version (age) 🤣🤣

Antiochus@lemmy.one on 14 Jun 12:39 next collapse

You’re getting a lot of flack in these comments, but you are absolutely right. All the concerns people have raised about “AI” and the recent wave of machine learning tech are (mostly) valid, but that doesn’t mean AI isn’t incredibly effective in certain use cases. Rather than hating on the technology or ignoring it, the FOSS community should try to find ways of implementing AI that mitigate the problems, while continuing to educate users about the limitations of LLMs, etc.

crispy_kilt@feddit.de on 14 Jun 14:27 next collapse

It’s spelled flak, not flack. It’s from the German word Flugabwehrkanone, which literally means aerial defense cannon.

Antiochus@lemmy.one on 16 Jun 00:00 collapse

Oh, that’s very interesting. I knew about flak in the military context, but never realized it was the same word used in the idiom. The idiom actually makes a lot more sense now.

FatCat@lemmy.world on 14 Jun 20:55 collapse

One comment that agrees 🥲

groucho@lemmy.sdf.org on 14 Jun 17:53 next collapse

As someone whose employer is strongly pushing them to use AI assistants in coding: no. At best, it’s like being tied to a shitty intern that copies code off stack overflow and then blows me up on slack when it magically doesn’t work. I still don’t understand why everyone is so excited about them. The only tasks they can handle competently are tasks I can easily do on my own (and with a lot less re-typing.)

Sure, they’ll grow over the years, but Altman et al are complaining that they’re running out of training data. And even with an unlimited body of training data for future models, we’ll still end up with something about as intelligent as a kid that’s been locked in a windowless room with books their whole life and can either parrot opinions they’ve read or make shit up and hope you believe it. I’ll think we’ll get a series of incompetent products with increasing ability to make wrong shit up on the fly until C-suite moves on to the next shiny bullshit.

That’s not to say we’re not capable of creating a generally-intelligent system on par with or exceeding human intelligence, but I really don’t think LLMs will allow for that.

tl;dr: a lot of woo in the tech community that the linux community isn’t as on board with

Sims@lemmy.ml on 14 Jun 23:18 next collapse

I agree. However, I think it is related to Capitalism and all the sociopathic corporations out there. It’s almost impossible to think that anything good will come from the Blue Church controlling even more tech. Capitalism has always used any opportunity to enslave/extort people - that continues with AI under their control.

However, I was also disappointed when I found out how negative ‘my’ crowd were. I wanted to create an open source lowend AGI to secure poor people a decent life without being attacked by Capitalism every day/hour/second, create abundance, communities, production, and in general help build a social sub society in the midst of the insane blue church and their propagandized believers.

It is perfectly doable to fight the Capitalist religion with homegrown AI based on what we know and have today. But nobody can do it alone, and if there’s no-one willing to fight the f*ckers with AI, then it takes time…

I definitely intend to build a revolution-AGI to kill off the Capitalist religion and save exploited poor people. No matter what happens, there will be at least one AGI that is trained on revolution, anti-capitalism and building something much better than this effing blue nightmare. The world’s first aggressive ‘Commie-bot’ ha! 😍

737@lemmy.blahaj.zone on 18 Jun 01:55 collapse

I’ve yet to see a need for “AI integration ✨” in to the desktop experience. Copilot, LLM chat bots, TTS, OCR, and translation using machine learning are all interesting but I don’t think OS integration is beneficial.

FatCat@lemmy.world on 18 Jun 10:53 collapse

Time 💫 will ✨ prove 💫 you ✨ wrong. 💫

737@lemmy.blahaj.zone on 20 Jun 17:09 collapse

not every high tech product or idea makes it, you don’t see a lot of netbooks or wifi connected kitchen appliances these days either; having the ability to make tiny devices or connecting every single device is not justification enough to actually do it. i view ai integration similarly: having an llm in some side bar to change the screen brightness, find some time or switch the keyboard layout isn’t really useful. being able to select text in an image viewer or searching through audio and video for spoken words for example would be a useful application for machine learning in the DE, that isn’t really what’s advertised as “AI” though.

737@lemmy.blahaj.zone on 20 Jun 17:11 next collapse

i don’t really think anyone would be against the last two examples being integrated in dolphin, nautilus, gwenview… either.

FatCat@lemmy.world on 20 Jun 21:29 collapse

Changing the brightness or WiFi settings can be very useful for many people. Not everyone is a Linux nerd and knows all the ins and outs of basic computing.

737@lemmy.blahaj.zone on 21 Jun 08:48 collapse

maybe, but these people wouldn’t own a pc with a dedicated gpu or neural network accelerator.