Vercel open-sources Open Agents to help companies build their own AI coding agents

15 Apr 2026 · 6 minute read

Paul Sawers

Freelance tech writer at Tessl, former TechCrunch senior writer covering startups and open source

AI coding tools work well enough in isolation, but drop them into a large codebase, and the cracks start to show.

That gap, between off-the-shelf coding agents and real-world codebases, is why some companies are starting to build their own coding agents, a shift that also means building the infrastructure to run and manage them.

And this is where cloud platform provider Vercel is setting out its stall. The company this week announced that it has open-sourced Open Agents, a reference platform for building and running cloud-based coding agents. So rather than releasing yet another assistant, Open Agents lays out the pieces needed to construct one — from the agent runtime and long-running workflows to sandboxed execution and model routing.

Open Agents dashboard

Building beyond off-the-shelf agents

Generic coding agents often struggle when dropped into large monorepos, failing to fully reflect the internal knowledge, integrations, or processes that define how a company actually builds software.

A growing number of companies are already moving to address that. Teams at Stripe, Ramp, Spotify, and Block have been building their own internal coding systems tailored to their own codebases and workflows, often releasing them to the broader public under an open source license.

For Vercel CEO Guillermo Rauch, the reasons companies are moving in this direction are both technical and strategic.

“On a technical level, off-the-shelf coding agents don't perform well with huge monorepos, don't have your institutional knowledge, integrations, and custom workflows,” Rauch explained on LinkedIn.

“On a business level, the moat of software companies will shift from 'the code they wrote', to the 'means of production' of that code. The alpha is in your factory.”

Open Agents is positioned as a way to support that approach. The project is designed to be forked and adapted, giving teams a starting point for building systems that fit their own codebases and tooling, rather than bending those environments to suit off-the-shelf coding agents.

Agents run continuously in the cloud, handling multi-step tasks that persist over time rather than finishing in a single interaction. Code execution is handled inside isolated sandboxes, while the agent itself operates outside that environment, interacting with it through defined tools.

Welcome to the “software factory”

Rauch argues that the competitive edge in software is shifting from the code itself to the systems that produce it — what he describes as the “software factory”. As more of the work is handled by agents, the value moves toward how those agents are orchestrated: how tasks are broken down, how context is managed, and how code moves from prompt to production.

That is where Open Agents fits – it’s all about defining the environment in which that work happens.

Open Agents is structured as a three-layer system — a web interface, a long-running agent workflow, and a sandboxed execution environment — designed to take a task from prompt through to actual code changes in a repository. The agent itself runs outside the sandbox, handling reasoning and orchestration, while the sandbox runs the code — managing files, shell commands, and git operations — allowing execution, state, and infrastructure to be managed independently.

Agent interacting with files, shell, and repo in real time

Control versus convenience

As AI coding agents are used more heavily, questions around cost, limits, and control become harder to ignore. Building on top of external platforms means working within their constraints, whether that is pricing, rate limits, or how features are exposed.

Open Agents promises a different route. It gives companies a way to run their own agent systems, with more control over how tasks are executed and how models are used. That control still comes with responsibility — while Vercel provides the underlying infrastructure, teams still need to define how these systems behave, integrate them with their own tooling, and maintain them over time. Running a setup like this is not trivial, and for many teams, off-the-shelf coding agents and managed platforms will remain the easier option.

Anthropic, for example, has gone the opposite direction entirely — its Claude Managed Agents service takes on execution, orchestration, and state on behalf of developers, turning what would otherwise be months of infrastructure work into a hosted platform.

Both Anthropic and Vercel are, in different ways, making the same argument: that the execution layer around an agent matters as much as the model powering it. State, durability, sandboxing, orchestration — these were once engineering footnotes. They are now the product.

That leaves companies with a choice. They can build and run these systems themselves, shaping them around their own code and workflows, or rely on providers to handle that layer for them. Open Agents doesn’t resolve that tension, but it makes one side of that choice easier to pursue.