Reuse non-prefix KV Cache and speed up RAG by 3X with LMCache. (dl.acm.org)
from yogthos@lemmy.ml to technology@lemmy.ml on 01 Jul 23:46
https://lemmy.ml/post/32562579

In modern LLM applications like RAG and Agents, the model is constantly fed new context. For example, in RAG, we retrieve relevant documents and stuff them into the prompt.

The issue is that this dynamically retrieved context doesn’t always appear at the beginning of the input sequence. Traditional KV caching only reuses a “common prefix,” so if the new information isn’t at the very start, the cache hit rate plummets, and your GPU ends up recomputing the same things over and over.
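To see why the prefix restriction bites, here's a minimal Python sketch of a prefix-keyed KV cache (the function name and token IDs are hypothetical, not LMCache's actual API). The second request contains exactly the same document tokens as the first, but because one retrieved document moved, only the shared system-prompt prefix hits the cache:

```python
# Illustrative sketch: a prefix-only KV cache keyed by the longest
# matching token prefix. All names and values here are made up.

def longest_cached_prefix(tokens: list[int], cache: dict) -> int:
    """Return the length of the longest prompt prefix with a cached KV entry."""
    for n in range(len(tokens), 0, -1):
        if tuple(tokens[:n]) in cache:
            return n
    return 0

# First prompt: [system] + [doc_A] + [doc_B] + [question]
system, doc_a, doc_b, question = [1, 2], [10, 11, 12], [20, 21, 22], [30]

cache = {
    tuple(system): "kv(system)",
    tuple(system + doc_a): "kv(system + doc_A)",
    tuple(system + doc_a + doc_b + question): "kv(full prompt)",
}

# Second request retrieves the same documents in a different order:
second = system + doc_b + doc_a + [31]
hit = longest_cached_prefix(second, cache)
print(f"cache hit covers {hit} of {len(second)} tokens")  # -> 2 of 9

# doc_A's KV cache exists, but it is unusable under prefix matching:
# the chunk no longer sits at the same offset in the sequence.
```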

CacheBlend changes the game by allowing for the reuse of pre-computed KV caches regardless of their position in the input sequence.

This makes it possible to achieve a 100% KV cache hit rate in applications like RAG. The gains reported in the paper are significant: time-to-first-token drops by 2.2-3.3x and inference throughput rises by 2.8-5x compared with full KV recompute, with little or no loss in generation quality.

CacheBlend works by handling the two main challenges of reusing non-prefix caches: the positional encodings baked into a pre-computed chunk cache assume the chunk sits at the front of the sequence, and cross-attention between chunks is lost when each chunk's KV cache is computed in isolation. It resolves both by selectively recomputing the KV values of a small fraction of tokens and blending them with the reused caches, as sketched below.
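As a rough illustration of that selective-recompute idea, here's a toy NumPy sketch. The `recompute_kv` stand-in, the 15% ratio, and the tensor shapes are illustrative assumptions, not CacheBlend's implementation; the point is the control flow: measure per-token KV deviation on an early layer, then recompute only the most-deviating tokens while reusing cached KV for the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d = 20, 4

# KV reused from per-chunk caches, computed without cross-chunk attention.
reused_kv = rng.normal(size=(n_tokens, d))

def recompute_kv(token_ids):
    """Toy stand-in for running prefill attention on the selected tokens."""
    return reused_kv[token_ids] + rng.normal(scale=0.1, size=(len(token_ids), d))

# Step 1: recompute one early layer in full and measure per-token deviation
# between the fresh KV and the reused (position-blind) KV.
fresh = recompute_kv(np.arange(n_tokens))
deviation = np.linalg.norm(fresh - reused_kv, axis=1)

# Step 2: recompute only the most-deviating ~15% of tokens; reuse the
# cached KV for everything else ("blending").
k = max(1, int(n_tokens * 0.15))
selected = np.argsort(deviation)[-k:]

blended = reused_kv.copy()
blended[selected] = recompute_kv(selected)
print(f"recomputed {k}/{n_tokens} tokens:", sorted(selected.tolist()))
```

Because only a small fraction of tokens go through attention again, the recompute cost can be hidden behind cache loading, which is where the end-to-end speedup comes from.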

An interactive CacheBlend demo is available at: github.com/LMCache/…/demo-rag-blending

#technology
