The Memory Problem
How neuroscience is shaping the way researchers think about long-context AI.
In April 2025, three researchers at the Chinese University of Hong Kong published a paper with a premise borrowed from cognitive science: what if a language model’s context window is short-term memory, its parameters are long-term memory, and the real challenge is not expanding the former but learning to transfer knowledge between them? Their framework, InfiniteICL, claimed to reduce context length by 90% while matching or exceeding the performance of full-context prompting, and on tasks stretching to two million tokens, it outperformed full-context approaches while using only 0.4% of the original input.
The paper landed at ACL 2025 Findings and received a polite academic reception, but did not become the basis for any commercial product. Its specific technique, converting context into parameter updates through a three-step pipeline of knowledge elicitation, selection, and consolidation, has not been widely adopted. Yet almost a year later, the problem it identified sits at the center of a debate that has only grown louder: whether the industry’s approach to long context is solving the right problem at all.
The Problem of Scale
By now, the context-window arms race has produced numbers that would have seemed absurd three years ago. Google’s Gemini models offer up to two million tokens, Meta’s Llama 4 advertises ten million, OpenAI’s GPT-5.2 pairs 400,000 tokens with a compaction endpoint that extends effective reach further, and Anthropic’s Claude Sonnet 4 now supports one million in beta for higher-tier users. The trajectory of these numbers is obvious, and the marketing departments behind them would like you to believe the problem is essentially solved.
The research tells a different story. The “lost in the middle” phenomenon, first documented by Liu et al. at Stanford in 2023, showed that models perform well on information placed at the beginning or end of their context but fail on information buried in the middle, producing a distinctive U-shaped accuracy curve. That finding has proven durable. The NoLiMa benchmark found that at 32,000 tokens, 11 of 12 tested models dropped below 50% of their short-context performance. Extending context length alone, even when the model has access to every token, degrades reasoning capability. The tokens are accessible, but they are not usable.
This is the gap identified by InfiniteICL, even if its particular solution has not become the standard. The paper’s core insight, that the bottleneck is not how much a model can see but how well it can consolidate what it sees into durable knowledge, has aged better than the framework itself. The technique works by using the model to generate question-answer pairs from context segments, filtering these through a selection process, and then fine-tuning the model’s parameters on the selected pairs using LoRA adapters. The context is effectively “studied” rather than merely held in view. At extreme compression ratios where conventional methods collapse to baseline performance, InfiniteICL maintained over 62% of full-context accuracy, a result that pointed toward the underlying principle: the relationship between a model and its context is more like learning than it is like reading.
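The consolidation step is the crux: instead of holding tokens in context, knowledge from the selected QA pairs is baked into the weights as a low-rank LoRA delta. A toy sketch of that update, with made-up shapes and values (the real A and B matrices would come from fine-tuning on the selected pairs):

```python
# Minimal LoRA-update sketch: consolidation turns selected QA pairs into
# a low-rank weight delta rather than tokens held in context.
# All shapes and values here are toy assumptions, not from the paper.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

d, r = 4, 1  # model dim 4, LoRA rank 1 (toy sizes)
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base weight
B = [[0.5], [0.0], [0.0], [0.0]]  # down/up projections that would be learned
A = [[0.0, 0.0, 0.0, 2.0]]        # by fine-tuning on the selected QA pairs

delta = matmul(B, A)  # rank-1 update: d*d values expressed by only 2*d*r parameters
W_adapted = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]
```

The base weights stay frozen; the "studied" context lives entirely in the small delta, which can be merged in or discarded.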
Consolidation Over Capacity
That principle has since generated a small but growing body of work. Temp-LoRA, proposed by Wang et al. in 2024, takes a similar approach by training temporary LoRA modules on context augmented with synthetic QA pairs. NVIDIA’s test-time training research, published in January 2026, compresses context into model weights through next-token prediction and achieves constant inference latency regardless of context length, a 35x speedup over full attention at two million tokens on an H100. The NVIDIA team wrote, with some confidence, that “the research community might finally arrive at a basic solution to long context in 2026.” Sakana AI’s Doc-to-LoRA, announced in February, learns to instantly internalize documents into temporary LoRA adapters, treating context storage as a parameter update rather than a memory problem.
The most interesting recent contribution may be Latent Context Compilation, published at the end of January 2026. Where InfiniteICL and Temp-LoRA bake context into model weights (creating what the authors call “stateful parameters” that complicate concurrent serving), Latent Context Compilation uses a disposable LoRA module as a compiler to distill long contexts into compact “buffer tokens,” stateless, portable memory artifacts that work with frozen base models. The distinction matters for production systems. A model whose weights change with every new context is difficult to serve to multiple users simultaneously; a set of portable buffer tokens that can be swapped in and out is a different engineering proposition entirely. At a 16x compression ratio, the method preserved fine-grained details where InfiniteICL and Temp-LoRA showed catastrophic forgetting of general capabilities.
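The serving argument can be made concrete with a small sketch. Everything below is an assumed interface, not Latent Context Compilation's actual API: one frozen model instance serves many users, with each request swapping in a different precompiled set of buffer tokens.

```python
# Sketch of why stateless buffer tokens simplify concurrent serving.
# FrozenModel and compile_context are hypothetical stand-ins.

class FrozenModel:
    def generate(self, buffer_tokens, prompt):
        # Base weights never change; compressed context arrives as a prefix.
        return f"answer({prompt}) given {len(buffer_tokens)} buffer tokens"

def compile_context(document, ratio=16):
    # Stand-in for the LoRA "compiler": distill a long document into a
    # short sequence of latent buffer tokens (placeholders here).
    n = max(1, len(document.split()) // ratio)
    return [f"<buf{i}>" for i in range(n)]

model = FrozenModel()  # one shared instance for all users
cache = {"doc_a": compile_context("word " * 160),
         "doc_b": compile_context("word " * 320)}

# Concurrent requests swap in different memories against the same weights.
out_a = model.generate(cache["doc_a"], "summarize")
out_b = model.generate(cache["doc_b"], "summarize")
```

Contrast this with a stateful-weights design, where serving `doc_a` and `doc_b` at once would require two copies of the adapted model, or careful adapter swapping per request.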
Meanwhile, the industry’s production answer has been more pragmatic than any of these research directions. OpenAI’s compaction feature, first introduced with GPT-5.1-Codex-Max and refined in GPT-5.2, performs server-side context compression that returns encrypted, opaque tokens carrying forward prior state. The feature was designed for agentic coding workflows where sessions can run for hours, repeatedly hitting context limits. It is, in effect, a proprietary black-box implementation of the same principle: when context gets too long, compress it into something smaller that the model can work with going forward. Microsoft’s LLMLingua family, which achieves up to 20x compression with minimal performance loss through token-level perplexity scoring, represents the extractive counterpart, pruning low-information tokens rather than consolidating knowledge. The HyCo2 framework, submitted to ICLR 2026, attempts to bridge the two approaches by combining hard token selection with soft latent compression, matching uncompressed performance while reducing token consumption by 88.8%.
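The extractive idea behind LLMLingua can be illustrated in a few lines: score each token by its surprisal under a small language model and keep only the most informative ones. The probabilities below are invented for the example; in practice they come from a real LM's next-token distribution.

```python
import math

# Toy extractive compression in the spirit of LLMLingua: prune tokens a
# small LM finds predictable, keep the surprising (information-dense) ones.

def compress(tokens, probs, keep_ratio=0.5):
    surprisal = [-math.log(p) for p in probs]  # low probability => high information
    budget = max(1, int(len(tokens) * keep_ratio))
    keep = sorted(range(len(tokens)), key=lambda i: surprisal[i], reverse=True)[:budget]
    return [tokens[i] for i in sorted(keep)]   # preserve original token order

tokens = ["the", "meeting", "is", "at", "noon", "on", "Tuesday"]
probs  = [0.9,   0.05,      0.8,  0.7,  0.02,   0.6,  0.03]
print(compress(tokens, probs, keep_ratio=0.43))  # → ['meeting', 'noon', 'Tuesday']
```

Function words with high predicted probability are dropped; the rare, load-bearing tokens survive, which is exactly the opposite of consolidation: nothing is learned, only pruned.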
Where This Leaves AI
The context window as currently conceived has a scaling problem that cannot be fixed by making it larger. The KV cache for a 500-billion-parameter model processing just 20,000 tokens requires roughly 126 gigabytes of memory; scale that to a million tokens and you are talking about infrastructure costs that make the “just stuff everything in the context” approach economically unsustainable for most applications. Even when the memory is available, the attention mechanism distributes focus across all tokens, so each additional piece of information receives proportionally less attention, a fundamental architectural constraint that no amount of context-window expansion addresses.
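The memory figure above is easy to reproduce as back-of-envelope arithmetic. The layer count, hidden size, and fp16 precision below are assumptions for a roughly 500-billion-parameter dense model without grouped-query attention; real architectures vary.

```python
# Back-of-envelope KV cache sizing under assumed architecture parameters.

def kv_cache_bytes(seq_len, num_layers, hidden_size, bytes_per_elem=2):
    # Two tensors (K and V) per layer, each of shape seq_len x hidden_size.
    return 2 * num_layers * hidden_size * seq_len * bytes_per_elem

gb = kv_cache_bytes(seq_len=20_000, num_layers=96, hidden_size=16_384) / 1e9
print(f"{gb:.0f} GB")  # ~126 GB at 20k tokens

gb_1m = kv_cache_bytes(seq_len=1_000_000, num_layers=96, hidden_size=16_384) / 1e9
```

Because the cache grows linearly with sequence length, the same model at one million tokens needs on the order of 6 terabytes of KV cache per request, which is the economic wall the paragraph describes.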
The InfiniteICL paper’s cognitive-science framing, short-term versus long-term memory, turns out to be a useful lens precisely because it names a distinction the industry has been learning through expensive trial and error. Humans do not remember by holding everything in working memory simultaneously; they consolidate, compress, and retrieve. The question now is which consolidation strategy will prove most practical: baking knowledge into weights (InfiniteICL, TTT), compiling it into portable tokens (Latent Context Compilation), pruning it extractively (LLMLingua), or compacting it through proprietary inference-time mechanisms (OpenAI). Each approach involves a different set of tradeoffs around latency, portability, fidelity, and engineering complexity, and none has yet established itself as the definitive answer.
The context window will continue to grow, because the marketing incentives are strong and the benchmarks reward it. But the most productive research in long-context AI has refocused on the question of how efficiently a model can learn from its context, rather than how much context it can handle. InfiniteICL has not yet changed the industry, but it articulated the context problem clearly enough that every major lab is now working on its own solution to it.

Even with Claude-level context, it pays to work the way you would with a local model: restate and restart often. That habit, mixed with fresh chats and axiom restatements, gets further than a single mega-chat.
Context-based question-and-answer pairs are how this person maintains mental bandwidth. It looks like magic to others, who happily overrun their buffers and throw up their hands.
Life is a lot more like those irritating test questions than we care to admit: the ones with unnecessary details included that point to wrong answers.