1-bit LLMs Could Solve AI’s Energy Demands (spectrum.ieee.org)
from floofloof@lemmy.ca to technology@lemmy.world on 01 Jun 2024 01:34
https://lemmy.ca/post/22343325

#technology

threaded - newest

Brunbrun6766@lemmy.world on 01 Jun 2024 02:12 next collapse

Know what uses less? No LLMs

sugar_in_your_tea@sh.itjust.works on 01 Jun 2024 04:12 collapse

Yay, I’m doing my part!

Naz@sh.itjust.works on 01 Jun 2024 02:40 next collapse

Try using a 1-bit LLM to test the article’s claim.

The perplexity loss is staggering: something like 75% of the accuracy lost, or more. It effectively turns a 30-billion-parameter model into a 7-billion-parameter one.

Highly recommended that you try to replicate their results.
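
For reference, perplexity is just the exponential of the average per-token negative log-likelihood, so it's easy to compute yourself. A minimal sketch with made-up token probabilities:

```python
import math

def perplexity(token_probs):
    """exp(average negative log-likelihood) over the tokens."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# Hypothetical per-token probabilities each model assigns to the correct token:
full_precision = [0.60, 0.45, 0.70, 0.55]
quantized = [0.20, 0.15, 0.30, 0.25]

print(perplexity(full_precision))  # ~1.76 (lower is better)
print(perplexity(quantized))       # ~4.59
```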

Hupf@feddit.de on 01 Jun 2024 05:54 next collapse
davidgro@lemmy.world on 01 Jun 2024 06:15 next collapse

But since it takes about 10% of the space (VRAM, etc.), it sounds like they could just start with a larger model and still come out ahead.
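
Back-of-the-envelope, counting the weights alone (activations and KV cache ignored):

```python
def weight_gb(params_billions, bits_per_weight):
    # memory needed for the weights alone, in GB
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(weight_gb(30, 16))     # 30B model at FP16       -> 60.0 GB
print(weight_gb(30, 1.58))   # 30B model at 1.58 bits  -> ~5.9 GB
print(weight_gb(200, 1.58))  # even 200B at 1.58 bits  -> ~39.5 GB
```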

Zos_Kia@lemmynsfw.com on 01 Jun 2024 07:30 next collapse

There is some research being done on fine-tuning 1-bit quants, and they seem pretty responsive to it. Of course you’ll never get a full generalist model out of it, but there’s some hope for tiny specialized models that can run on a CPU for a fraction of the energy bill.

The big models are great marketing because their verbal output is believable, but they’re grossly overkill for most tasks.

kromem@lemmy.world on 01 Jun 2024 21:48 collapse

There’s actually a perplexity improvement parameter-to-parameter for BitNet-1.58, which increases as it scales up.

So yes, post-training quantization has apparent perplexity issues, but if you train the quantization in from the start, it is better than FP.

Which makes sense through the lens of the superposition hypothesis, where the weights are actually representing a hyperdimensional virtual vector space. If the weights have too much precision, competing features might compromise on fuzzier representations instead of restructuring the virtual network to better-matching nodes.

Looking at the data so far, constrained weight precision is probably going to be the future of pretraining within a generation or two.
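
For the curious, the core trick is quantization-aware training: the forward pass sees ternary weights while the gradient still updates a latent full-precision copy via a straight-through estimator. A minimal PyTorch sketch along the lines of the BitNet-b1.58 recipe (layer sizes arbitrary):

```python
import torch
import torch.nn as nn

class BitLinear(nn.Linear):
    """Linear layer whose forward pass uses weights quantized to {-1, 0, +1}."""

    def forward(self, x):
        w = self.weight
        scale = w.abs().mean().clamp(min=1e-5)          # absmean scaling
        w_q = (w / scale).round().clamp(-1, 1) * scale  # ternarize
        w_q = w + (w_q - w).detach()                    # straight-through estimator
        return nn.functional.linear(x, w_q, self.bias)

layer = BitLinear(16, 4)
out = layer(torch.randn(2, 16))
out.sum().backward()  # gradients still reach layer.weight despite the rounding
```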

potatopotato@sh.itjust.works on 01 Jun 2024 03:17 next collapse

Making AI more efficient will just mean more AI.

pearsaltchocolatebar@discuss.online on 01 Jun 2024 05:05 next collapse

Generative AI is great if used as a tool instead of a solution.

FaceDeer@fedia.io on 01 Jun 2024 05:46 next collapse

Since I find AIs to be useful, that sounds fine to me.

0xD@infosec.pub on 01 Jun 2024 17:03 collapse

So?

[deleted] on 01 Jun 2024 03:40 next collapse

.

tal@lemmy.today on 01 Jun 2024 05:56 next collapse

So, first, that’s just a reduction. But set that aside, and let’s talk big picture here.

My GPU can use something like 400 watts.

A human runs at about 100 watts of constant power consumption.

So even setting aside all other costs of a human and only paying attention to direct energy costs, if an LLM running on my GPU can do something in under a quarter of the time I can, then it’s more energy-efficient.
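
In numbers, with those assumptions:

```python
GPU_WATTS = 400    # the GPU figure above, under load
HUMAN_WATTS = 100  # rough constant human power draw

def gpu_breakeven_seconds(human_task_seconds):
    # The GPU wins on direct energy whenever GPU_WATTS * t_gpu < HUMAN_WATTS * t_human
    return HUMAN_WATTS * human_task_seconds / GPU_WATTS

print(gpu_breakeven_seconds(3600))  # a 1-hour human task -> 900 s (15 minutes)
```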

I won’t say that that’s true for all things, but there are definitely things that Stable Diffusion or the like can do today in a whole lot less than a quarter of the time it would take me.

Traister101@lemmy.today on 01 Jun 2024 07:02 next collapse

ChatGPT can output an article in a much shorter time than it’d take me to write one, but people would probably like mine more.

wischi@programming.dev on 01 Jun 2024 07:48 next collapse

The problem is that using those tools, no matter how energy-efficient, will add to the total amount of energy humans use, because even if an AI generates an image faster than a human could, the human still needs 100 W constantly.

This doesn’t mean that we shouldn’t make it more efficient, but let’s be honest: more energy-efficient AI just means that we would use even more AI everywhere.

derpgon@programming.dev on 01 Jun 2024 09:29 next collapse

But speaking of efficiency, a human can do more useful tasks while the AI is crunching numbers, though that is very subjective.

wischi@programming.dev on 02 Jun 2024 18:42 collapse

It depends on what you mean by useful. Most humans are (at least at the moment) more versatile than even the most advanced AI we have. But you have to keep in mind that there are jobs with pretty mundane tasks where you don’t really need the intelligence and versatility of a human.

derpgon@programming.dev on 02 Jun 2024 21:52 collapse

That’s what I meant: keep the tasks separated, and let each half do what it does better than the other.

RmDebArc_5@sh.itjust.works on 01 Jun 2024 13:06 collapse

Solution: remove human

That’s what a lot of news sites are doing: getting rid of large parts of their staff and having the rest do the same work with LLMs. If you burn the no-longer-needed employees as an alternative heating solution, your energy usage effectively drops to zero.

wischi@programming.dev on 02 Jun 2024 18:37 collapse

True, but it’s still not what I meant, unless they kill those humans. The employees who did that work before still need the 100 W. They might now do something else (or just be unemployed), but the net energy usage is not going down.

milicent_bystandr@lemm.ee on 01 Jun 2024 09:17 collapse

That said, the LLM isn’t running an array of bonus functions like breathing, wondering why you said that stupid thing to your aunt’s cousin 15 years ago, and keeping tabs on your ambient noise for possible phone calls from that nice boy who promised to call you back.

muntedcrocodile@lemm.ee on 01 Jun 2024 07:24 next collapse

We invented multi-bit models so we could get more accuracy, since neural networks are based on human brains, which are 1-bit models themselves. A 2-bit neuron is 4 times as capable as a 1-bit neuron but only double the size and power requirements. This whole thing sounds like BS to me. But then again, maybe complexity is more efficient than per-unit capability, since that’s the tradeoff.
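
The tradeoff being gestured at, in a couple of lines: representable states grow exponentially with bit width while storage and power grow roughly linearly.

```python
for bits in (1, 2, 4, 8):
    print(f"{bits}-bit neuron: {2**bits} states, ~{bits}x the storage of 1-bit")
```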

Wappen@lemmy.world on 01 Jun 2024 08:17 next collapse

Human brains aren’t 1-bit models. Far from it, actually. I’m not an expert, but I know that neurons in the brain encode different signal strengths in their firing frequency.

muntedcrocodile@lemm.ee on 01 Jun 2024 11:17 collapse

Firing is on and off.

SaltySalamander@fedia.io on 01 Jun 2024 11:55 next collapse

Human brains aren't digital. They're very analog.

muntedcrocodile@lemm.ee on 01 Jun 2024 12:33 collapse

Neuronal firing is often understood as a fundamentally binary process, because a neuron either fires an action potential or it does not. This is often referred to as the “all-or-none” principle. Once the membrane potential of a neuron reaches a certain threshold, an action potential will fire. If this threshold is not reached, it won’t fire. There’s no such thing as a “partial” action potential; it’s a binary, all-or-none process.

Frequency modulation: even though an individual neuron’s action potential can be considered binary, neurons encode the intensity of the stimulation in the frequency of action potentials. A stronger stimulus causes the neuron to fire action potentials more rapidly. Again, binary in nature, not analog.
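
A toy sketch of that rate-coding idea (all numbers made up):

```python
def spike_count(stimulus, duration_ms=100, max_rate_hz=200):
    """Each spike is identical and all-or-none; the stimulus intensity
    is carried by how many spikes occur per unit time."""
    rate_hz = max_rate_hz * stimulus
    return int(rate_hz * duration_ms / 1000)

print(spike_count(0.2))  # weak stimulus   -> 4 spikes in 100 ms
print(spike_count(1.0))  # strong stimulus -> 20 spikes in 100 ms
```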

floofloof@lemmy.ca on 01 Jun 2024 13:08 next collapse

Neuronal firing is often understood as a fundamentally binary process, because a neuron either fires an action potential or it does not. This is often referred to as the “all-or-none” principle.

Isn’t this true of standard multi-bit neural networks too? This seems to be what a nonlinear activation function achieves: translating the input values into an all-or-nothing activation.

The characteristic of a 1-bit model is not that its activations are recorded in a single bit but that its weights are. There are no gradations of connection weights: they are just on or off. As far as I know, that’s different from both standard neural nets and from how the brain works.
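
The distinction in a minimal numpy sketch (random numbers just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))  # graded, full-precision connection weights
x = rng.normal(size=8)

# All-or-none *activations* on top of graded weights (step nonlinearity):
step_activations = (w @ x > 0).astype(float)

# A 1-bit-*weight* model: connections collapse to a sign, activations stay graded:
w_1bit = np.sign(w)
graded_activations = np.tanh(w_1bit @ x)

print(step_activations)    # binary outputs from graded connections
print(graded_activations)  # graded outputs from binary connections
```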

ItsMeForRealNow@lemmy.world on 01 Jun 2024 13:14 next collapse

So what you are saying is they are discrete in time and pulse-modulated, which can encode far more information than the way NNs work on a processor.

[deleted] on 01 Jun 2024 17:04 collapse

.

conciselyverbose@sh.itjust.works on 02 Jun 2024 00:57 collapse

We really don’t know jack shit, but we know more than enough to know fire rate is hugely important.

echodot@feddit.uk on 01 Jun 2024 12:36 next collapse

Human brains aren’t binary. They send signals at a lot of different strengths, so “on” has a lot of possible values. The part of the brain that controls emotions considers a low but non-zero level of activation to be happy and a high level of activation to be angry.

It’s not simple at all.

Frost752@lemmy.world on 01 Jun 2024 18:00 next collapse
buzz86us@lemmy.world on 01 Jun 2024 18:37 next collapse

We need to scale fusion

kromem@lemmy.world on 01 Jun 2024 21:52 next collapse

The network architecture seems to create a virtualized hyperdimensional network on top of the actual network nodes, so the node precision really doesn’t matter much as long as quantization occurs in pretraining.

If it’s done post-training, it’s degrading the precision of the already-encoded network, which is sometimes acceptable but always lossy. But done at the pretraining stage, it actually seems to be a net improvement over higher-precision weights, even if you throw efficiency concerns out the window.

You can see this in the perplexity graphs in the BitNet-1.58 paper.
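
The lossy half of that is easy to demonstrate with a toy round-trip (hypothetical stand-in weights, absmean ternarization):

```python
import torch

torch.manual_seed(0)
w = torch.randn(256, 256)  # stand-in for already-trained full-precision weights

def ternarize(w):
    scale = w.abs().mean()
    return (w / scale).round().clamp(-1, 1) * scale

err = (w - ternarize(w)).pow(2).mean()
print(f"post-training round-trip MSE: {err.item():.4f}")  # nonzero: information is discarded
```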

lunar17@lemmy.world on 02 Jun 2024 06:21 collapse

None of those words are in the bible

kromem@lemmy.world on 02 Jun 2024 11:09 collapse

No, but some alarmingly similar ideas are in the heretical stuff, actually.

Miaou@jlai.lu on 02 Jun 2024 13:52 collapse

Multi-bit models exist because that’s how computers work, but there’s been a lot of work on using e.g. fixed-point over floating-point for things like FPGAs, or shorter integer types, and often the results are more than good enough.
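
A minimal sketch of the fixed-point idea (Q8.8: 16-bit integers with 8 fractional bits, the kind of arithmetic that maps cheaply onto FPGA fabric):

```python
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS  # 256

def to_fixed(x):
    return int(round(x * SCALE))

def to_float(q):
    return q / SCALE

def fx_mul(a, b):
    return (a * b) >> FRAC_BITS  # renormalize after the integer multiply

a, b = to_fixed(1.75), to_fixed(-0.3125)
print(to_float(fx_mul(a, b)))  # -0.546875, matching 1.75 * -0.3125 exactly
```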

Knock_Knock_Lemmy_In@lemmy.world on 02 Jun 2024 08:47 collapse

Smaller and speedier means larger token windows and greater variety of models.

Not less energy.