Alibaba also provides an open-source app; it even has support for their multimodal voice-chat model Qwen2.5-Omni: github.com/alibaba/MNN
Is the chat uncensored?
And unmonitored? Don't trust anything from Google anymore.
What makes this better than Ollama?
Quote: “all running locally, without needing an internet connection once the model is loaded. Experiment with different models, chat, ask questions with images, explore prompts, and more!”
So you can download it and set the device to airplane mode, never go online again - they won’t be able to monitor anything, even if there’s code for that included.
That is exactly what Ollama does too.
Sounds counterintuitive on a smartphone, where you most likely want to be online again at some point.
So trust them. If you don’t, but still want to use this, buy a separate device for it or run it in a VM.
Can’t? Then this is not for you.
I'm not gonna use my smartphone as a local LLM machine.
everything is unmonitored if you don’t connect to the network.
But not everything works in those conditions.
it does if you make it work in those conditions.
software that “phones home” is easy to fool.
Just firewall the software, or is there anything fancier I would need to do?
typically the phone-home is looking for a response to unlock.
use a packet sniffer to see what the request/response is and replicate it with a proxy or response server.
this is also known as a man-in-the-middle (MITM).
takes skill and knowledge to do, but once you’ve done a few dozen it’s pretty easy, since most software phone-homes are looking for static, non-encrypted responses.
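as a rough sketch of that replay idea (every hostname, path, and response body below is hypothetical; the sniffer has to show you the real ones first):

```python
# minimal fake "phone home" server -- assumes the app checks a plain-HTTP
# endpoint and expects a static JSON reply. the hostname, path, and body
# here are made up; capture the real ones with a packet sniffer first.
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSE = b'{"status": "ok", "licensed": true}'  # the captured reply

class FakeLicenseServer(BaseHTTPRequestHandler):
    def do_GET(self):
        # replay the static response the app expects, whatever the path
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(CANNED_RESPONSE)))
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

    do_POST = do_GET  # some apps POST their check-in instead

if __name__ == "__main__":
    # first point the phone-home hostname at yourself, e.g. in /etc/hosts:
    #   127.0.0.1  license.example-vendor.com
    # binding port 80 usually needs elevated privileges.
    HTTPServer(("127.0.0.1", 80), FakeLicenseServer).serve_forever()
```

this only covers the plain-HTTP case; if the app uses TLS with certificate pinning you’d need a proper MITM proxy and a trusted certificate on top.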
Censorship is model-dependent, so you can select one of the models without the guardrails.
Why would I use this over Ollama?
Ollama can’t run on Android
That’s fair, but in that case I think I’d rather self-host an Ollama server and connect to it with an Android client. Much better performance.
Yes, that’s my setup. But this will be useful for cases where the internet connection is not reliable.
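As an illustration, here is a minimal sketch of the client side of that setup. Ollama exposes a small HTTP API on port 11434, so any script or app that can make a POST request works as a client; the LAN address and model tag below are placeholders for your own:

```python
# minimal client for a self-hosted Ollama server on the local network.
# the server address and model tag are placeholders -- use your own.
import json
import urllib.request

OLLAMA_URL = "http://192.168.1.50:11434/api/generate"  # hypothetical LAN host

def ask(prompt: str, model: str = "deepseek-r1:7b") -> str:
    # /api/generate with stream=False returns a single JSON object
    # whose "response" field holds the full completion.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Why is the sky blue?"))
```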
How is Ollama compared to GPT models? I used the paid tier for work and I’m curious how this stacks up.
It’s decent, with the DeepSeek model anyway. It’s not as fast and has a lower parameter count, though. You might just need to try it and see if it fits your needs.
You can use it in Termux.
Has this actually been done? If so, I assume it would only be able to use the CPU
Yeah, I have it in Termux. Ollama is in the Termux package repos. The generation speed does feel like CPU speed, but idk.
Is there any useful model you can run on a phone?
Try PocketPal instead
Llama.cpp (which Ollama runs on) can. And many chat programs for phones can use it.
Enclave on iOS does the trick for the rare times i need a local LLM
Didn’t know about this. Checking it out now, thanks!
Duck.ai doesn’t data-mine, and it has o3-mini, which I have found to be very good. It’s got some extra functionality, like lines to break up text.
Yeah, Duck is all I’ve bothered with since it came out, since you don’t even need to log in to use it.
Nice! I saw Mozilla also added an AI chat in the browser recently (not in the phone version, from what I’ve seen, though).
It is too bad duck.ai only runs the small models. GPT-4o-mini is not very good; it can be very inaccurate and very inconsistent :( I would like to see 4.1-mini instead: faster, better, and it has function calling, so it can do web searches, for example. o3 can’t, so it only knows what it knows up to 2023.
But thanks for the information! I will be looking out for when 4.1 is added.
I’ve been using duck.ai recently myself and quite like it. My only complaint is that the chats have a length limit, so if you’re working on complex projects you can run into those limits pretty quickly. I use it for worldbuilding for a novel I’m working on, and I have to use ChatGPT for thematic stuff because it has a better memory, but otherwise it’s great for quick/small things.
You never heard of Ollama or Docker Model Runner?
Android and iOS.
Excellent, I will be sure not to use this, like all Google shit.
In a few years you won’t be able to anyway
I’m just reaching the endgame faster, then.
All the time I spent trying to get rid of gemini just to now download this. Am I stupid?
I wouldn’t think so - it depends on your priorities.
The open-source and offline nature of this, without the pretense of “Hey, we’re gonna use every query you give us as a data point to shove more products in your face,” seems very appealing compared to Gemini. There’s also the fact that Gemini is constantly being shoved in our faces and preinstalled, whereas this is a completely optional download.