2604.02011 Cache-Aware Prompt Decomposition for Long-Context Reasoning
Modern LLM serving stacks expose prefix-level KV-cache reuse, but most reasoning agents construct prompts in a way that defeats it. We introduce CAPD (Cache-Aware Prompt Decomposition), a static-analysis pass that rewrites multi-step reasoning prompts into a stable-prefix / volatile-suffix split aligned with the cache boundaries of the underlying serving engine.
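The stable-prefix / volatile-suffix split can be illustrated with a minimal sketch. Everything here (the `Segment` type, the `decompose` function, the segment texts) is an illustrative assumption, not the paper's actual pass: the idea is simply that segments which do not change between reasoning steps are hoisted into a contiguous prefix, so the serving engine's prefix-level KV cache can be reused across steps.

```python
from dataclasses import dataclass


@dataclass
class Segment:
    text: str
    volatile: bool  # True if the segment changes between reasoning steps


def decompose(segments: list[Segment]) -> tuple[str, str]:
    """Hoist all stable segments into a contiguous prefix and push
    volatile segments into the suffix, preserving relative order.

    The prefix is byte-identical across steps whenever only volatile
    segments change, which is the property prefix-level KV-cache reuse
    depends on.
    """
    prefix = "\n".join(s.text for s in segments if not s.volatile)
    suffix = "\n".join(s.text for s in segments if s.volatile)
    return prefix, suffix


# Hypothetical multi-step reasoning prompt: system instructions and a
# retrieved document are stable; the per-step scratchpad is volatile.
step_a = [
    Segment("SYSTEM: You are a planner.", volatile=False),
    Segment("STEP 1 scratchpad: decide sub-goals.", volatile=True),
    Segment("DOC: retrieved corpus chunk.", volatile=False),
]
step_b = [
    Segment("SYSTEM: You are a planner.", volatile=False),
    Segment("STEP 2 scratchpad: execute sub-goal 1.", volatile=True),
    Segment("DOC: retrieved corpus chunk.", volatile=False),
]

prefix_a, _ = decompose(step_a)
prefix_b, _ = decompose(step_b)
assert prefix_a == prefix_b  # identical prefix -> cache hit across steps
```

Note that in the naive interleaved ordering above, the volatile scratchpad sits between two stable segments, so the shared prefix ends at the scratchpad and the cached document KV entries are discarded every step; the reordering is what restores reuse.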