Someone made a GPT-like chatbot that runs locally on Raspberry Pi, and you can too (www.xda-developers.com)
from throws_lemy@lemmy.nz to technology@lemmy.world on 21 Feb 2024 15:55
https://lemmy.nz/post/7139716

#technology

QuadratureSurfer@lemmy.world on 21 Feb 2024 17:33

Direct link to the GitHub repo:
github.com/nickbild/local_llm_assistant?tab=readm…

It’s a small model by comparison. If you want something that’s offline and actually comparable to ChatGPT 3.5, you’ll want the Mixtral 8x7B model instead (running on a beefy machine):

mistral.ai/news/mixtral-of-experts/
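
For anyone wanting to try that, here’s a minimal sketch using llama-cpp-python as the runner (my assumption; the thread doesn’t name one) with a quantized Mixtral GGUF file. The filename is a placeholder for whatever quantization you actually download:

```python
# Minimal sketch: running a quantized Mixtral 8x7B locally with
# llama-cpp-python. The GGUF filename is hypothetical -- substitute
# the file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if VRAM allows
)

out = llm("Q: What is a mixture-of-experts model? A:", max_tokens=128)
print(out["choices"][0]["text"])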

mesamunefire@lemmy.world on 21 Feb 2024 17:55

Nice! That’s a cool project, I’ll have to give it a try. I love the idea of self-hosting local LLMs. I’ve been playing around with lmstudio.ai, and it downloads directly from Hugging Face.
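
Pulling a model file straight from Hugging Face can also be scripted; roughly what LM Studio does behind the scenes, using the huggingface_hub library (the repo and filename below are examples, not a specific recommendation):

```python
# Sketch: fetching a quantized model file from Hugging Face.
# Repo id and filename are examples, not an endorsement.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",   # example repo
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",    # example quantization
)
print(f"Model saved to: {path}")
```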

mtw@lemmy.blahaj.zone on 23 Feb 2024 00:22

There’s also ollama, which seems to be similar. Not sure if LM Studio is open source, but ollama is.
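
A minimal sketch of talking to ollama from Python (assumes the ollama server is running locally and the model has already been pulled, e.g. with `ollama pull mistral`):

```python
# Sketch: chatting with a locally served model through the ollama
# Python client. Assumes `ollama pull mistral` has been run already.
import ollama

response = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "In one sentence, what is a local LLM?"}],
)
print(response["message"]["content"])
```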

Deceptichum@kbin.social on 21 Feb 2024 19:14

Sick, I only need 90 GB of VRAM!

QuadratureSurfer@lemmy.world on 21 Feb 2024 19:45

I’ve got it running with a 3090 and 32GB of RAM.

Some setups let you run a model with a hybrid of system RAM and VRAM (it will just be slower than running it exclusively from VRAM).
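
In llama-cpp-python terms (an assumption on my part; pick your own runner), that hybrid split is the n_gpu_layers knob: as many layers as fit in VRAM go on the GPU, and the rest run from system RAM:

```python
# Sketch of a hybrid VRAM/RAM split: only some layers are offloaded
# to the GPU; the remainder run from system RAM. Model path hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct.Q4_K_M.gguf",  # hypothetical file
    n_gpu_layers=20,  # tune down until the model fits in your VRAM
    n_ctx=4096,
)
```

Fewer offloaded layers means less VRAM used but slower generation; more layers, the reverse.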

Deceptichum@kbin.social on 21 Feb 2024 19:52

Yeah, but damn does it get slow.

I always find it interesting how text is so much slower than image generation. I can do a 1024x1024 image in probably 20s, but I get like 1 word a second with text.

aBundleOfFerrets@sh.itjust.works on 22 Feb 2024 03:01

Languages are complex and, more importantly, much less forgiving of error. Text generation is also autoregressive: each new token requires a full pass through the model, while an image is produced in a fixed number of denoising steps.

DarkThoughts@fedia.io on 21 Feb 2024 22:05

Hopefully we’ll see more purpose-specific hardware for this, like expansion cards with pretty much just tensor cores and their own RAM.

Deceptichum@kbin.social on 21 Feb 2024 23:12

I’d love to see some consumer-level AI stuff; sadly it all seems to be designed for server farms, and by the time it ages out into consumer prices it’s so obsolete there’s no point in getting it.

DarkThoughts@fedia.io on 22 Feb 2024 00:06

It's not quite consumer level, I'd say, but Coral.ai has some small Google Edge TPUs.

raldone01@lemmy.world on 24 Feb 2024 20:00

Do they want consumer AI cards to exist, though?

Think about the data!

Deceptichum@kbin.social on 24 Feb 2024 20:52

Card makers? They only want money; if there’s enough consumer-level demand, they will make them.

raldone01@lemmy.world on 24 Feb 2024 21:42

I guess you’re right.

topinambour_rex@lemmy.world on 22 Feb 2024 11:56

Graphics cards without video outputs have existed for a while.

DarkThoughts@fedia.io on 21 Feb 2024 22:16

I tried llamafile for text gen too, but I couldn't get ROCm to work properly with it to run on my GPU without building it myself, which I'm really not into. And CPU text gen is waaaaaay too slow for anything. A Mixtral response took ~250 seconds for ~1k context tokens; Mistral was about 52 seconds or somewhere around that.

https://github.com/Mozilla-Ocho/llamafile
Mixtral is definitely beefy; Mistral is quite a bit faster, and there are a few even smaller prebuilt ones. But the smaller you go, the less complex the responses will be.
I think llamafile is a good step in the right direction, but it's still not a good out-of-the-box experience yet. At least I got farther with it than with oobabooga (the recommended backend for SillyTavern), which would just crash whenever it generated anything, without even giving me an error.

Flumpkin@slrpnk.net on 22 Feb 2024 00:45

How fast are they with a good GPU?

DarkThoughts@fedia.io on 22 Feb 2024 10:47

Did you miss the first part, where I explained that I couldn't get it to run on my GPU? I only have a 6650 XT anyway, but even that would be significantly faster than my CPU. How much faster I can't say without trying it, but I suspect that with longer chats, and consequently larger context sizes, it would still be too slow to be really usable, unless you're okay with waiting ages for a response.

Flumpkin@slrpnk.net on 22 Feb 2024 12:05

Sorry, I’m just curious in general how fast these local LLMs are. Maybe someone else can give some rough info.
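
One way to get concrete numbers on your own hardware is to time a generation and compute tokens per second; a sketch, again assuming llama-cpp-python and a hypothetical local model file:

```python
# Sketch: measuring local generation speed in tokens per second.
# Model path is hypothetical; n_gpu_layers=-1 offloads everything.
import time
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_gpu_layers=-1)

start = time.time()
out = llm("Explain GPU offloading in one paragraph.", max_tokens=256)
elapsed = time.time() - start

n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.1f} tok/s")
```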

HotsauceHurricane@lemmy.one on 21 Feb 2024 19:44

*cannot function correctly without T-Mobile speaker.

melroy@kbin.melroy.org on 21 Feb 2024 23:37

I cannot function with T-Mobile internet, that's for sure. I'm moving to another ISP.

anticurrent@sh.itjust.works on 21 Feb 2024 20:37

Can we have smaller, more domain-specific models that don’t require more than casual hardware? Like a small model for coding, one for medicine, one for history, and so on.

fruitycoder@sh.itjust.works on 21 Feb 2024 21:07

Check out Hugging Face! Honestly, fine-tuned models for specific domains seem very popular (if for nothing else than because training smaller models is just easier!).
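
Loading one of those domain fine-tunes is only a few lines with the transformers library; the repo id below is a hypothetical stand-in for whichever model you pick:

```python
# Sketch: loading a small domain-tuned model from Hugging Face.
# The repo id is hypothetical -- substitute a real fine-tune.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/small-coding-model"  # hypothetical repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tok("def fibonacci(n):", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```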

DarkThoughts@fedia.io on 21 Feb 2024 22:03

Unfortunately, the roleplaying-chatbot-type models are typically fairly sizeable and demanding. I'm curious how this will develop with more specific AI hardware though, like expansion cards with primarily tensor cores plus their own RAM, so that you don't have to use your GPU for it. If we can drag down the price of such hardware, locally run models could become much more viable and mainstream.

Pantherina@feddit.de on 21 Feb 2024 22:42

Dude, sorry to say, but roleplay is not as important as medicine or coding XD

DarkThoughts@fedia.io on 21 Feb 2024 22:49

For me they are. I have no use for medicine or coding bots.

long_chicken_boat@sh.itjust.works on 21 Feb 2024 23:32

But you benefit from the very software you use daily, and from developments in medicine.

I play D&D from time to time, but saying that roleplaying is more important than medicine is just nuts.

Pantherina@feddit.de on 21 Feb 2024 23:39

Not wanting to be mean, I just find the thought of people talking to robots a bit strange, and I use them as tools only. I'm not sure what “roleplay” means here; if it's some “fantasy D&D generator”, you could still say that may be better done by humans, to keep that grey matter running.

DarkThoughts@fedia.io on 22 Feb 2024 00:00

Not so much the latter; I'm pretty specifically talking about my personal use case here. lol
“Roleplaying” in this scenario isn't really referring to actual tabletop-type RPGs, btw. It's the LLM playing specific characters or personas that you then chat with in specific (or not-so-specific) scenarios. Although the same tech is also being experimented with for NPCs in video games.
But who knows, a specifically trained model could potentially make a half-decent dungeon master too.

Kilnier@lemmy.ca on 22 Feb 2024 00:41

There's also a huge amount of training, medical and otherwise, that's done through role-playing. I could definitely see medical students getting use out of learning telemedicine with LLMs ultimately adapted from TTRPG character-generator schemas.

Coreidan@lemmy.world on 22 Feb 2024 01:49

That’s gonna be a no from me dawg