Anthropic tests ‘auto dream’ to clean up Claude Code's memory

26 Mar 2026 · 6 minute read

Paul Sawers

Freelance tech writer at Tessl, former TechCrunch senior writer covering startups and open source

Anthropic is quietly working on a new feature for Claude Code designed to clean up how the system stores and uses memory over time.

Over repeated sessions, stored context can accumulate stale details, contradictions, and noise that end up confusing the model. Anthropic aims to address this with “auto dream,” a feature that periodically reviews and rewrites what Claude remembers.

Claude Code has always had a form of persistent memory in the shape of CLAUDE.md, the instruction files users write to carry project rules and preferences across sessions. What Anthropic added in February was a feature dubbed “auto memory,” which lets Claude save its own notes as it works — from build commands and debugging insights to architecture decisions and code style preferences.

Those notes are stored in a separate memory file, typically surfaced as MEMORY.md, which Claude updates over time and reloads at the start of each session. Thariq Shihipar, member of technical staff at Anthropic, described this as a split between user-defined instructions and model-generated memory.

“You can now think of Claude.MD as your instructions to Claude, and Memory.MD as Claude's memory scratchpad it updates,” Shihipar said. “If you ask Claude to remember something it will write it there.”

Auto memory in Claude Code
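Anthropic hasn't published the exact layout of these model-generated notes, but based on the categories Shihipar describes — build commands, debugging insights, architecture decisions, and style preferences — a MEMORY.md file might look something like the following. This is purely illustrative, not Anthropic's actual format:

```markdown
# MEMORY.md — illustrative example of model-generated notes

## Build
- Run `npm run build` before the tests; the client code is generated.

## Debugging
- The flaky test in auth.spec.ts is caused by a shared fixture, not the auth code.

## Style
- The team prefers named exports over default exports.
```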

However, as those notes accumulate, the system can become harder to manage, with outdated or conflicting information building up. And that, it seems, is why Anthropic is working on a way to keep that memory in check.

Claude dares to dream

Auto dream isn’t yet an official feature within Claude Code. References have surfaced in community reporting around recent Claude Code versions, including v2.1.83, with users pointing to a toggle inside the /memory interface. While the toggle suggests the feature is present in some form, it cannot yet be invoked directly — there is no working /dream command. Instead, reports indicate it runs automatically only when certain conditions are met, such as after a period of activity across multiple sessions.

Based on early community analysis, auto dream appears to run as a background process that periodically scans and reorganizes Claude’s stored notes.

The process involves reviewing existing memory files, identifying what is still relevant, merging overlapping entries, and removing outdated or conflicting information. It also standardizes details — such as replacing relative timestamps with fixed dates — and rebuilds the index so it remains usable.

Importantly, the system is constrained in scope. It doesn’t modify the underlying codebase, and only operates on memory files. It also appears to run infrequently, triggering only after certain thresholds of usage and elapsed time are met.
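Anthropic hasn't published how auto dream is implemented, but the behavior the community describes — merging overlapping entries, removing outdated ones, and replacing relative timestamps with fixed dates — can be sketched in a few lines. Everything here (the entry format, the `consolidate` function, the freshness threshold) is a hypothetical illustration, not the actual mechanism:

```python
from datetime import date

# Hypothetical memory entries -- auto dream's real storage format is not public.
memory = [
    {"note": "Build with `make all`", "added": "2026-01-10"},
    {"note": "Build with `make all`", "added": "2026-02-02"},   # duplicate
    {"note": "CI tests are failing", "added": "2025-11-01"},    # stale
    {"note": "Deployed yesterday", "added": "2026-03-01"},      # relative time
]

def consolidate(entries, today=date(2026, 3, 26), max_age_days=120):
    """One pass over stored notes: merge, prune, and standardize."""
    seen, kept = set(), []
    for e in entries:
        if e["note"] in seen:          # merge overlapping entries
            continue
        seen.add(e["note"])
        age = (today - date.fromisoformat(e["added"])).days
        if age > max_age_days:         # remove outdated information
            continue
        # Standardize details: swap a relative timestamp for the note's date.
        note = e["note"].replace("yesterday", f"on {e['added']}")
        kept.append({"note": note, "added": e["added"]})
    return kept

index = consolidate(memory)            # the rebuilt, cleaned-up memory index
```

A real implementation would presumably use the model itself to judge relevance and conflicts rather than a fixed age threshold, but the shape of the pass — read, filter, rewrite, reindex — matches what users have reported.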

It’s worth noting that auto dream sits alongside a broader set of techniques used to manage long-running AI agents.

One common approach is context compression, where earlier interactions are summarized or reduced so they can fit within a model’s limited working memory. Related ideas, often referred to as compaction in coding agents, take a similar approach by trimming or condensing prior context during a session to keep the model on track.
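Compaction can be illustrated with a toy function that keeps recent turns verbatim and collapses older ones into a summary placeholder. In a real agent the summary would be produced by the model itself; the function name and message format here are invented for the sketch:

```python
def compress_context(messages, keep_recent=2):
    """Keep the last `keep_recent` turns verbatim; collapse the rest."""
    if len(messages) <= keep_recent:
        return list(messages)
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    # A real agent would ask the model to summarize `older`; this is a stub.
    summary = f"[summary of {len(older)} earlier turns]"
    return [summary] + recent

history = ["set up the repo", "ran the test suite",
           "fixed lint errors", "started feature X"]
compressed = compress_context(history)
```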

Auto dream addresses a different layer of that same problem by reorganizing and cleaning up memory between sessions, aiming to keep what the system retains usable over longer periods.

Auto dream borrows from the human brain

Developers who have examined the system describe it as addressing a real limitation. John Rice, a software engineer and co-founder of Peekaboo, noted that memory systems can become “bloated with noise, contradictions, and stale context,” eventually degrading performance if left unchecked.

Rice also likened the process to how the human brain consolidates information during sleep. Matthew Nour, a principal scientist at Microsoft AI, echoed that comparison, pointing to ideas from neuroscience, including “systems consolidation” and “offline generative replay.”

“Exciting to see how neurobiology can continue to inspire AI memory architectures,” Nour wrote.

Kevin Thomas, senior AI product manager at IBM, riffed on similar analogies to the human brain, describing a system that periodically reviews stored memory, removes outdated or conflicting information, and reorganizes what remains into a more coherent structure.

“That's literally what your brain does during REM sleep,” Thomas wrote. “Raw inputs during the day, consolidation at night. Strengthen what matters, discard what doesn't.”

What emerges is a different way of thinking about memory in AI systems — something that requires ongoing upkeep and revision as it grows.

As agents are used across more sessions and over longer periods, the quality of what they remember is just as important as how much they can store. Left unmanaged, that memory can become inconsistent, contradict itself, or lose relevance over time.

Auto dream appears to be an attempt to address that directly, introducing a system that reviews and restructures memory in the background.