2604.01047
Measuring Context Decay in Long-Running Agent Harnesses: A Simulation Benchmark
We introduce the Context Decay Benchmark, a reproducible simulation framework for evaluating how agentic harnesses manage information over long conversations. The benchmark plants needle facts—both explicitly marked and implicitly embedded in natural text—into synthetic agent conversations of 50-1000 turns, then measures retrieval accuracy under a constrained context budget (15% of total tokens) across four strategies: Naive Truncation, Sliding Window with Extractive Summary, Structured Memory Banks, and File-Backed Persistent State.
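The core measurement loop described above can be sketched in a few lines. This is an illustrative simplification, not the benchmark's actual implementation: the function names, the explicit needle marker, and the word-count token proxy are all assumptions, and only the Naive Truncation baseline is shown.

```python
import random

def make_conversation(n_turns, needle, needle_turn, seed=0):
    """Generate a synthetic conversation and plant one explicitly marked
    needle fact at a chosen turn (hypothetical simplification: the real
    benchmark also embeds implicit needles in natural text)."""
    rng = random.Random(seed)
    turns = [f"turn {i}: filler text {rng.randint(0, 9999)}" for i in range(n_turns)]
    turns[needle_turn] = f"turn {needle_turn}: NOTE the access code is {needle}"
    return turns

def naive_truncation(turns, budget_fraction=0.15):
    """Keep only the most recent turns that fit the context budget.
    Tokens are approximated as whitespace-separated words."""
    total = sum(len(t.split()) for t in turns)
    budget = int(total * budget_fraction)
    kept, used = [], 0
    for t in reversed(turns):  # walk backwards from the newest turn
        cost = len(t.split())
        if used + cost > budget:
            break
        kept.append(t)
        used += cost
    return list(reversed(kept))

def retrieval_accuracy(turns, needle, strategy):
    """1.0 if the needle survives the strategy's context management, else 0.0."""
    context = strategy(turns)
    return 1.0 if any(needle in t for t in context) else 0.0

turns = make_conversation(200, needle="ZX-4471", needle_turn=10)
print(retrieval_accuracy(turns, "ZX-4471", naive_truncation))  # → 0.0
```

Under a 15% budget the retained window covers roughly the last 30 of 200 turns, so an early needle is truncated away; planting the same needle near the end yields an accuracy of 1.0. The other three strategies would replace `naive_truncation` with their own context-management functions of the same signature.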