Anthropic open-sources its internal code-simplifier agent

19 Jan 2026 · 6 minute read

Paul Sawers

Freelance tech writer at Tessl, former TechCrunch senior writer covering startups and open source

Cleaning up code after a long AI-assisted build is becoming a problem in its own right. As developers lean more heavily on coding agents, refactoring and simplification are increasingly handled by AI too — raising new questions about cost, context, and efficiency.

With that in mind, Anthropic has open-sourced a code-simplifier agent used internally by the Claude Code team. Claude Code, for the uninitiated, is Anthropic’s command-line coding environment that allows developers to plan, write, and modify software by interacting with Claude directly from the terminal.

Code-simplifier is designed to be invoked at the end of long coding sessions or before merging complex pull requests. It focuses on rewriting existing code to reduce duplication, clarify logic, and make projects easier to maintain.

In its purest form, Code-simplifier is perhaps best described as a packaged, reusable prompt with guardrails — a fixed set of instructions that constrain how the model refactors code, such as prioritising readability over cleverness, avoiding unnecessary abstraction, and preserving existing behaviour rather than introducing new logic.

In practice, the agent runs as a discrete step inside Claude Code, where developers can invoke it to simplify pull request changes before release.

Ask Claude to use the code simplifier agent at the end of a long coding session

The agent is available as a plugin inside Anthropic’s public `claude-plugins-official` repository. While the underlying Claude model remains proprietary, the agent’s instructions and behavior are fully visible and modifiable — giving developers a look at how Anthropic structures internal tooling around code cleanup.
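For teams that would rather adapt the pattern than install the plugin wholesale, the same idea can be reproduced as a project-level subagent. The sketch below is illustrative, not Anthropic's actual file: Claude Code reads subagent definitions from Markdown files with YAML frontmatter under `.claude/agents/`, and the instruction text here simply restates the guardrails described above.

```markdown
---
# Hypothetical file: .claude/agents/code-simplifier.md
# `name` and `description` are standard subagent frontmatter fields;
# the Markdown body below becomes the agent's system prompt.
name: code-simplifier
description: Simplify recently changed code before merging. Use at the
  end of long coding sessions or on complex pull requests.
---

You are a code simplifier. Rewrite existing code to reduce duplication
and clarify logic, under these constraints:

- Preserve existing behaviour exactly; never introduce new logic.
- Prioritise readability over cleverness.
- Avoid unnecessary abstraction; prefer removing layers to adding them.
- Limit changes to code touched in the current session or pull request.
```

With a definition like this in place, asking Claude Code to "use the code-simplifier agent" delegates the cleanup pass to a subagent bound by those constraints, rather than relying on an ad-hoc prompt each time.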

Community feedback centers on token usage

Early reaction skewed somewhat skeptical, with most discussion focused on token costs and whether the agent offers meaningful benefits beyond an ordinary, well-written prompt.

Indeed, running a simplification agent requires rereading and rewriting code that may already have consumed a large number of tokens to generate. For developers on fixed token budgets, that can feel like paying twice — once to write the code, and again to clean it up.

That concern showed up repeatedly in community reactions, including one widely upvoted Reddit comment in which a user jokingly describes asking Claude Code to simplify their program. They wait several minutes for the agent to complete the task, only to find that the rewritten code no longer works — followed immediately by a token-limit warning that cuts the session short.

Reddit community reacts

The fictional exchange is shorthand for a broader frustration: cleanup passes that are both costly and inconclusive, leaving developers with broken code and no remaining budget to fix it.

However, the skepticism extended beyond cost to the substance of the release itself. Several commenters argued that Code-simplifier amounted to little more than a shared prompt, questioning whether its “open source” framing matched the scope of what was actually being released.

In one exchange, users pointed out that the instructions had already been visible in Anthropic's plugin repository for months, while others mocked the announcement as amounting to publishing a handful of paragraphs describing how the model should behave.

"We're open-sourcing prompts now..."

Simplification through better context

Rather than relying on additional agents to rewrite code after it has been generated, some teams are focusing on context itself as the lever for simplification. The idea is that better-scoped, more precise context — such as explicitly supplying information about existing open-source libraries or internal frameworks — can help agents reuse established code instead of generating custom logic, reducing both complexity and the token cost of cleaning it up later.

That framing reflects questions raised in the community following Anthropic’s release: if code routinely needs a dedicated “clean and simplify” pass, is something missing earlier in the process? Instead of treating simplification as a final step, the argument goes, better instructions and context upfront could prevent much of that complexity from being introduced at all.

Tessl builds on that idea by focusing on how context is supplied to agents before code is written, developing a platform that lets teams provide structured, curated context — including specifications and library knowledge — so agents have a clearer understanding of what already exists and how it should be used.

Seen that way, Anthropic’s code-simplifier and platforms like Tessl address different stages of the same problem. One helps clean up complexity after it accumulates; the other aims to prevent unnecessary complexity from being written in the first place.

As coding agents take on more of the software lifecycle, the tradeoff is becoming clearer. Tools that focus on rewriting can help once complexity has already set in, but approaches that shape what agents generate in the first place may prove more effective — particularly as teams start to feel the cost of repeated passes over the same code.
