While I definitely do not want an LLM (especially not OpenAI or whatever) to have access to my terminal or other stuff on my PC, and in general don’t have any use for that, I find it cool that something like this is available now.
Remember, it’s totally optional and nobody forces you to download that stuff. You have the choice to ignore it, and that’s the great thing about Linux!
I look forward to not installing it.
From the title I thought the GNOME Foundation had made an AI client for a sec, until I read the article.
Idk why people don’t read the article before commenting.
Newelle supports interfacing with the Google Gemini API, the OpenAI API, Groq, and also local large language models (LLMs) or ollama instances for powering this AI assistant.
So you configure it with your preferred model, which can include a locally run one. And it seems to be its own package, not something built into GNOME itself, so you can easily uninstall it if you won’t use it.
Seems fine to me. I probably won’t be using it, but it’s an interesting idea. Being able to run terminal commands seems risky though. What if the AI bricks my system? Hopefully they make you confirm every command before it runs any of them or something.
What I’d like to see (though it’s unclear whether it would be supported) is using a model on my LAN. I have run ollama models on a desktop and remotely interfaced with them over SSH from another computer on the same network. This would be ideal: you can have your own local model on your own network, put it on a powerful but energy-efficient home server, and let it serve all devices on your network, rather than each one running its own local model or using a corporate model.
Yep, the OpenAI API and/or the ollama one work for this no problem in most projects. You just give it the address and port you want to connect to, and that address can be localhost, LAN, another server on another network, whatever.
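To make that concrete: ollama exposes an OpenAI-compatible endpoint under `/v1` on its default port, so “local vs LAN” really is just the address. Sketch below with stdlib only; the LAN host is made up:

```python
# Build a chat request against an ollama server's OpenAI-compatible
# endpoint. Only the host changes between localhost and a LAN box.
import json
from urllib.request import Request

def chat_request(host: str, port: int, model: str, prompt: str) -> Request:
    """Construct (but don't send) an OpenAI-style chat completion request."""
    url = f"http://{host}:{port}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return Request(url, data=body,
                   headers={"Content-Type": "application/json"})

# Same code, different deployments — only the address differs.
local = chat_request("localhost", 11434, "llama3", "hi")
lan   = chat_request("192.168.1.50", 11434, "llama3", "hi")  # example LAN IP
```

Sending it is just `urllib.request.urlopen(local)` once a server is actually listening there.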
Works with Ollama, neat!
I haven’t tested this but TBH as someone who has run Linux at home for 25 years I love the idea of an always alert sysadmin keeping my machine maintained and configured to my specs. Keep my IDS up to date. And so on.
Two requirements:
1. Be an open-source local model with no telemetry
2. Let me review proposed changes to my system and explain why they should be made
- That is not what this does
- You can certainly have unattended updates without an LLM in the mix.
Like what do you need to keep configured? lol Linux is set it and forget it. I’ve had installs be fine from day one to year 7. It’s not like Windows, where Microsoft is constantly changing things and changing your settings. It takes minimal effort to keep a Linux server/system going after the initial configuration.
You could use AI for self-healing network infrastructure, but in the context of what this tool would do, I’m struggling. You could monitor logs or IDS/IPS, but you’d really just be replacing a solution that already exists (SNMP). And yeah, SNMP isn’t going to be pattern matching, but your IDS would already be doing that. You don’t need your traffic pattern matching system pattern matched by AI.
Do you use an IDS? If not, why not? Have you taken care of automating encryption and backups to the cloud? There’s a new open-source shared media server; are you interested in configuring, securing, and testing it?
It’s mostly set and forget, Earth is mostly harmless, etc
For some reason, these local LLMs are straight up stupid. I tried DeepSeek R1 through ollama and it got everything wrong. Anyone got the same results? I did the 7b and 14b (if I remember those numbers correctly); the 32b straight up didn’t install because I didn’t have enough RAM.
Did you use a heavily quantized version? Those models are much smaller than the state-of-the-art ones to begin with, and if you chop their weights from float16 down to 2-bit quants or something, it reduces their capabilities a lot more.
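The RAM side of this is just arithmetic, by the way. Rough estimate of weight storage only (it ignores KV cache and runtime overhead, so real usage is higher):

```python
# Back-of-envelope weight storage for a local model:
# parameter count × bits per weight / 8, expressed in GiB.
def weights_gib(params_billions: float, bits: int) -> float:
    return params_billions * 1e9 * bits / 8 / 2**30

fp16_7b = weights_gib(7, 16)  # ~13 GiB: tight on a typical desktop
q4_7b   = weights_gib(7, 4)   # ~3.3 GiB: fits, at some quality cost
q2_7b   = weights_gib(7, 2)   # ~1.6 GiB: tiny, and it shows in output
```

Which is why the 32b wouldn’t fit, and why the versions that do fit have been squeezed hard.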
I had more success with Qwen3, but it still makes small mistakes (e.g. when I asked it to compare GStreamer and FFmpeg, it got the licensing wrong).
I’ve had good experience with smollm2:135m. The test case I used was determining why an HTTP request from one system was not received by another system. In total, there are 10 DB tables it must examine not only for logging but for configuration to understand if/how the request should be processed or blocked. Some of those were mapping tables designed such that table B must be used to join table A to table C, table D must be used to join table C to table E. Therefore I have a path to traverse a complete configuration set (table A <-> table E).
I had to describe each field being pulled (~150 fields total), but it was able to determine the correct reason for the request failure. The only issue I’ve had was a separate incident using a different LLM, when I tried to use AI to generate golang template code for a database library I wanted to use. It didn’t use it and recommended a different library. When instructed that it must use this specific library, it refused (politely). That caught me off-guard. I shouldn’t have to create a scenario where the AI goes to jail if it fails to use something. I should just have to provide the instruction and, if that instruction is reasonable, await output.
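For anyone who didn’t follow the join-path bit: the structure I described looks roughly like this toy sqlite version (table and column names invented, the real schema is much bigger). B and D are pure mapping tables, so A can only reach E through the full chain:

```python
# Toy A <-> B <-> C <-> D <-> E traversal, where B and D are mapping
# tables: you must join through them to connect configuration to outcome.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (a_id INTEGER, name TEXT);
    CREATE TABLE b (a_id INTEGER, c_id INTEGER);   -- maps A -> C
    CREATE TABLE c (c_id INTEGER, rule TEXT);
    CREATE TABLE d (c_id INTEGER, e_id INTEGER);   -- maps C -> E
    CREATE TABLE e (e_id INTEGER, blocked INTEGER);
    INSERT INTO a VALUES (1, 'sender');
    INSERT INTO b VALUES (1, 10);
    INSERT INTO c VALUES (10, 'http-inbound');
    INSERT INTO d VALUES (10, 100);
    INSERT INTO e VALUES (100, 1);
""")

# Walk the complete configuration path A -> E in one query.
row = conn.execute("""
    SELECT a.name, c.rule, e.blocked
    FROM a JOIN b ON a.a_id = b.a_id
           JOIN c ON b.c_id = c.c_id
           JOIN d ON c.c_id = d.c_id
           JOIN e ON d.e_id = e.e_id
""").fetchone()
```

The model had to reconstruct exactly this kind of chain from my field descriptions to explain why the request was blocked.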
The performance is relative to the user. Could it be that you’re a god damned genius? :/
Big nope from me dawg
holy shit, no thank you
Or, ORRRR… just do the stuff yourself and don’t further perpetuate this dumbshit until it doesn’t require an entire month’s worth of energy for an efficient home to run to search “Hentai Alien Tentacle Porn” for you.
Buncha savages.
search “Hentai Alien Tentacle Porn” for you
This is suspiciously specific 🙂
It’s clearly what most Linux users who would use “AI” would be searching for.
it doesn’t use that much energy