ExtremeDullard@lemmy.sdf.org
on 02 Aug 19:29
nextcollapse
I look forward to not installing it.
ElcaineVolta@kbin.melroy.org
on 02 Aug 19:36
nextcollapse
holy shit, no thank you
just_another_person@lemmy.world
on 02 Aug 19:43
nextcollapse
Or, ORRRR…just do the stuff yourself and don’t further perpetuate this dumbshit until it doesn’t require an entire month’s worth of energy for an efficient home to search “Hentai Alien Tentacle Porn” for you.
Buncha savages.
ExtremeDullard@lemmy.sdf.org
on 02 Aug 20:08
nextcollapse
search “Hentai Alien Tentacle Porn” for you
This is suspiciously specific 🙂
just_another_person@lemmy.world
on 02 Aug 20:14
collapse
It’s clearly what most Linux users who would use “AI” would be searching for.
jumping_redditor@sh.itjust.works
on 03 Aug 00:34
collapse
it doesn’t use that much energy
nymnympseudonym@lemmy.world
on 02 Aug 19:46
nextcollapse
I haven’t tested this, but TBH as someone who has run Linux at home for 25 years, I love the idea of an always-alert sysadmin keeping my machine maintained and configured to my specs. Keep my IDS up to date. And so on.
Two requirements:
1. Be an open-source local model with no telemetry
2. Let me review proposed changes to my system and explain why they should be made
just_another_person@lemmy.world
on 02 Aug 20:04
nextcollapse
That is not what this does
You can certainly have unattended updates without an LLM in the mix.
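On Debian/Ubuntu, for example, that’s two lines of stock unattended-upgrades configuration (other distros have equivalents like dnf-automatic):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```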
ExtremeDullard@lemmy.sdf.org
on 02 Aug 20:04
nextcollapse
Even if it were open source (it isn’t, because no model is really open source ultimately), and even if it let you review what it says it’s gonna do, AI is known for pulling all kinds of shit and lying about it.
Would you really trust your system to something that can do this? I wouldn't...
I wouldn't trust a Sales team member with database permissions, either. This is why we have access control in sysadmin. That AI had permission to operate as the user in Replit's cloud environment. Not a separate restricted user, but as that user and without sandboxing. That should never happen. So, if I were managing that environment I would have to ask the question: is it the AI's fault for breaking it or is it my fault for allowing the AI to break it?
AI is known for pulling all kinds of shit and lying about it.
So are interns. I don't think you can hate the tool for it being misused, but you certainly can hate the user for allowing it.
nymnympseudonym@lemmy.world
on 02 Aug 23:45
collapse
Have you checked Mistral? Open weights and training set. What more do you want?
Like what do you need to keep configured? lol Linux is set it and forget it. I’ve had installs be fine from day one to year 7. It’s not like Windows, where Microsoft is constantly changing things and changing your settings. Like it takes minimal effort to keep a Linux server/system going after initial configuration.
You could use AI for self-healing network infrastructure, but in the context of what this tool would do, I'm struggling. You could monitor logs or IDS/IPS, but you'd really just be replacing a solution that already exists (SNMP). And yeah, SNMP isn't going to be pattern matching, but your IDS would already be doing that. You don't need your traffic pattern matching system pattern matched by AI.
nymnympseudonym@lemmy.world
on 03 Aug 19:27
collapse
Do you use IDCS? If not, why not?
Have you taken care of automating encryption and backup to cloud?
There’s a new open source shared media server, are you interested in configuring, securing, and testing it?
It’s mostly set and forget, Earth is mostly harmless, etc
Guenther_Amanita@slrpnk.net
on 02 Aug 20:06
nextcollapse
While I definitely do not want an LLM (especially not OpenAI or whatever) to have access to my terminal or other stuff on my PC, and in general don’t have any use for that, I find it cool that something like this is available now.
Remember, it’s totally optional and nobody forces you to download that stuff. You have the choice to ignore it, and that’s the great thing about Linux!
Xiisadaddy@lemmygrad.ml
on 02 Aug 20:08
nextcollapse
Idk why people don’t read the article before commenting.
Newelle supports interfacing with the Google Gemini API, the OpenAI API, Groq, and also local large language models (LLMs) or ollama instances for powering this AI assistant.
So you configure it with your preferred model, which can include a locally run one. And it seems to be its own package, not something built into GNOME itself, so you can easily uninstall it if you won’t use it.
Seems fine to me. I probably won’t be using it, but it’s an interesting idea. Being able to run terminal commands seems risky though. What if the AI bricks my system? Hopefully they make you confirm every command before it runs any of them or something.
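A confirm-before-run gate is simple to build, for what it’s worth. A minimal sketch (the function name is mine, not anything Newelle actually ships):

```python
import subprocess

def run_with_confirmation(command: str, ask=input) -> bool:
    """Show an LLM-proposed shell command; only run it if the user approves."""
    answer = ask(f"Run `{command}`? [y/N] ").strip().lower()
    if answer != "y":
        print("Skipped.")
        return False
    # Only reached after explicit approval.
    subprocess.run(command, shell=True, check=False)
    return True
```

The `ask` parameter is just there so the prompt can be swapped out (GUI dialog, test harness, etc.); defaulting to deny on anything but an explicit “y” is the important part.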
What I’d like to see, though it’s unclear whether it’s supported, is a LAN model. I have run ollama models on a desktop and remotely interfaced with them via ssh from another computer on the same network. This would be ideal, since you can have your own local model on your own network, put it on a powerful but energy-efficient home server, and let it interface with all devices on your network, rather than each one running its own local model or using a corporate model.
Yep, the OpenAI API and/or the ollama one work for this no problem in most projects. You just give it the address and port you want to connect to, and that address can be localhost, the LAN, another server on another network, whatever.
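As a sketch of what that usually looks like against Ollama’s `/api/generate` endpoint (the home-server address and model name below are placeholders, not anything this app ships with):

```python
import json
import urllib.request

def build_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def ask(base_url: str, model: str, prompt: str) -> str:
    """Send the prompt and return the model's reply text."""
    with urllib.request.urlopen(build_request(base_url, model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# e.g. ask("http://192.168.1.50:11434", "llama3", "hello")
# 11434 is Ollama's default port; the 192.168.x.x host is hypothetical.
```

The only thing that changes between localhost and a LAN server is the base URL, which is the whole appeal.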
chicagohuman@lemmy.zip
on 02 Aug 21:34
nextcollapse
Works with Ollama, neat!
shiroininja@lemmy.world
on 03 Aug 01:15
nextcollapse
Big nope from me dawg
DonutsRMeh@lemmy.world
on 03 Aug 01:20
nextcollapse
For some reason, these local LLMs are straight up stupid. I tried DeepSeek R1 through ollama and it was straight up stupid and gave everything wrong. Anyone got the same results? I did the 7b and 14b (if I remember these numbers correctly); the 32b straight up didn’t install because I didn’t have enough RAM.
Did you use a heavily quantized version? Those models are much smaller than the state-of-the-art ones to begin with, and if you chop their weights from float16 down to 2-bit quantization, it reduces their capabilities a lot more.
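The size side of that is easy to ballpark: file size is roughly parameters × bits per weight, ignoring quantization overhead and any layers left unquantized:

```python
def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough model file size in GB: parameters x bits per weight / 8."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 7B model: ~14 GB at float16, ~3.5 GB at 4-bit, ~1.75 GB at 2-bit.
```

Which is why a 2-bit quant fits in modest RAM at all, and also why it throws away so much of the model.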
I've had good experience with smollm2:135m. The test case I used was determining why an HTTP request from one system was not received by another system. In total, there are 10 DB tables it must examine not only for logging but for configuration to understand if/how the request should be processed or blocked. Some of those were mapping tables designed such that table B must be used to join table A to table C, table D must be used to join table C to table E. Therefore I have a path to traverse a complete configuration set (table A <-> table E).
I had to describe each field being pulled (~150 fields total), but it was able to determine the correct reason for the request failure.
The only issue I've had was a separate incident using a different LLM when I tried to use AI to generate golang template code for a database library I was wanting to use.
It didn't use it and recommended a different library.
When instructed that it must use this specific library, it refused (politely).
That caught me off-guard. I shouldn't have to create a scenario where the AI goes to jail if it fails to use something.
I should just have to provide the instruction and, if that instruction is reasonable, await output.
I had more success with Qwen3 14b/8b, but it still makes small mistakes (e.g. I asked it to compare GStreamer and FFmpeg and it got the licensing wrong).
The performance is relative to the user. Could it be that you’re a god damned genius? :/
From the title I thought the GNOME Foundation had made an AI client for a sec, until I read the article.