DeepSeek releases DeepSeek OCR (huggingface.co)
from yogthos@lemmy.ml to technology@lemmy.ml on 20 Oct 12:23
https://lemmy.ml/post/37805263

LLMs totally choke on long context because of that O(n²) scaling nightmare. It’s the core bottleneck for almost all modern LLMs, and it comes straight from their self-attention mechanism.

In simple terms, for every single token in the input, the attention mechanism has to look at and calculate a score against every other single token in that same input.

So, if you have a sequence with n tokens, the first token compares itself to all n tokens. The second token also compares itself to all n tokens… and so on. This means you end up doing n*n, or n^2, calculations.
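To make that concrete, here’s a toy NumPy sketch of the pairwise score matrix (illustrative only — real attention adds softmax, value vectors, and multiple heads):

```python
import numpy as np

# Toy self-attention scores: every token's query is compared against
# every token's key, giving an n x n matrix -- the n^2 cost in action.
n, d = 8, 16                      # 8 tokens, 16-dim embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(n, d))       # one query row per token
K = rng.normal(size=(n, d))       # one key row per token

scores = Q @ K.T / np.sqrt(d)     # shape (n, n): n*n pairwise scores
print(scores.shape)
```

Double n to 16 and the matrix holds 256 entries instead of 64 — that’s the quadratic blow-up.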

This is a nightmare because the cost doesn’t grow nicely. If you double your context length, you’re not doing 2x the work; you’re doing 2² = 4x the work. If you 10x the context, you’re doing 10² = 100x the work. This explodes the amount of computation and, more importantly, the GPU memory needed to store all those scores. This is the fundamental bottleneck that stops you from just feeding a whole book into a model.

Well, DeepSeek came up with a novel solution: just stop feeding the model text tokens. Instead, you render the text as an image and feed the model the picture. It sounds wild, but the whole point is that a huge wall of text can be “optically compressed” into way, way fewer vision tokens.
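The payoff is easiest to see as back-of-envelope arithmetic (the numbers here are illustrative, just picked to match the reported ratio):

```python
# Illustrative "optical compression" arithmetic: the same page of text
# costs far fewer tokens when fed in as a rendered image.
text_tokens = 1000      # a page of plain text, tokenized normally
vision_tokens = 100     # the same page rendered and encoded as an image
ratio = text_tokens / vision_tokens
print(f"{ratio:.0f}x fewer tokens")
```

And since attention cost is quadratic, a 10x token reduction is roughly a 100x reduction in attention work over that span.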

To do this, they built a new encoder called DeepEncoder. It’s a clever stack: a SAM-base model for local perception, then a 16x convolutional compressor to crush the token count, and finally a CLIP model to capture the global meaning. Compressing before the global-attention stage means it can handle high-res images without activation memory melting the GPU.
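Rough token math for that pipeline (the patch and image sizes here are my assumptions for illustration, not the paper’s exact configuration):

```python
# Hypothetical DeepEncoder-style token budget: ViT-style patches feed the
# local (SAM) stage, then a 16x convolutional compressor shrinks the
# sequence before the global (CLIP) stage ever sees it.
img_side, patch = 1024, 16
patch_tokens = (img_side // patch) ** 2   # 64*64 = 4096 tokens into SAM
compressed = patch_tokens // 16           # 16x compressor -> 256 tokens
print(patch_tokens, "->", compressed)
```

Running the cheap local stage on the long sequence and the expensive global-attention stage on the short one is what keeps memory sane at high resolution.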

And the results are pretty insane. At a 10x compression ratio, the model can look at the image and “decompress” the original text with about 97% precision, and it still manages 60% accuracy at a crazy 20x compression. As a bonus, this thing is now a SOTA OCR model: it beats models like MinerU2.0 while using fewer than 800 vision tokens where MinerU needs almost 7,000. It can also parse charts into HTML, read chemical formulas, and handles around 100 languages.

The real kicker is what this means for the future. The authors are basically proposing this as an LLM forgetting mechanism. You could have a super long chat where the recent messages are crystal clear, but older messages get rendered into blurrier, lower-token images. It’s a path to unlimited context by letting the model’s memory fade, just like a human’s.
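One way that fading could look in practice (entirely my sketch of the authors’ proposal — the schedule and numbers are made up):

```python
# Hypothetical memory-fade schedule: older turns get re-rendered at lower
# resolution, halving their vision-token budget each step back in time.
def tokens_for_turn(age: int, base: int = 256, floor: int = 16) -> int:
    """Token budget for a conversation turn `age` steps in the past."""
    return max(base >> age, floor)    # floor keeps old turns legible

budget = [tokens_for_turn(age) for age in range(6)]
print(budget)
```

The total context cost becomes a decaying series instead of growing linearly with every turn, which is what makes “unlimited” context plausible.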

#technology


null@piefed.nullspace.lol on 20 Oct 13:44

That’s a really clever, interesting solution. DeepSeek seems to be leaning hard into optimization and efficiency where other AI companies are just throwing more money at more hardware.

yogthos@lemmy.ml on 20 Oct 14:05

Yeah, it’s refreshing to see the focus being on actual efficiency.

fruitycoder@sh.itjust.works on 21 Oct 06:39

Honestly, efficiency, more hardware options, and federation are the only real AI advancements that make huge waves for me. Like, getting a reasonable output out of a hundreds-of-billions-of-dollars DC just isn’t as cool as: your PC, phone, or toaster can now do xyz or contribute to xyz

HiddenLayer555@lemmy.ml on 20 Oct 20:24

Interesting. We’re coming full circle to text being read as images. Like all text was before computers.

Enceladus@lemmy.ca on 20 Oct 20:40

Does it add Egyptian hieroglyphs to the CS major, or should I just learn Babylonian Akkadian like every new freshman?

yogthos@lemmy.ml on 20 Oct 20:41

turns out a picture is really worth a thousand words :)