
"Agent Therapy" for Codebases

Sean Roberts
VP of Applied AI, Netlify

Agent Experience Is the New Developer Experience

with Sean Roberts


Chapters

Trailer
[00:00:00]
Introduction
[00:01:13]
Understanding Developer and Agent Experience
[00:03:40]
Challenges and Solutions in Agent Experience
[00:07:43]
Building a Better Agent Experience
[00:09:40]
Conclusion and Final Thoughts
[00:27:55]

In this episode

In this episode of AI Native Dev, host Simon Maple speaks with Sean Roberts, VP of Applied AI at Netlify, about the emerging field of Agent Experience (AX) and its significance as the next evolution of Developer Experience (DX). They explore how developers can enhance their workflows by designing codebases to support AI agents as first-class users, emphasizing the importance of standardizing toolchains, making context explicit, and maintaining a continuous feedback loop for improvement. Sean highlights that AX is not about replacing developers but empowering them through strategic tool and process evolution.

Agents are now first-class users of your codebase. In this AI Native Dev episode, host Simon Maple talks with Sean Roberts, VP of Applied AI at Netlify, about why Agent Experience (AX) is the next evolution of Developer Experience (DX). Sean argues that developers increasingly delegate work to AI agents, so teams must intentionally design for an “agent user” the same way they design for humans—without losing sight that the end beneficiary is still the human developer.

From Developer Experience to Agent Experience

Sean frames Agent Experience as an extension of Developer Experience: if developers are building on your platform or codebase, you already have a DX—good or bad. Now that agents are actively participating in development workflows, you also have an AX—good or bad. The strategic question is no longer “are agents touching our code?” but “are we supporting the agents our developers rely on?”

Crucially, AX is about empowering developers, not replacing them. Think of the developer as delegating a chunk of the build to an agent. The quality of the result depends on the connective tissue between the human’s intent and the agent’s ability to operate on your system. That “connection” is AX: everything the agent needs to perform well, from documentation to clear APIs and explicit architectural maps.

Sean emphasizes that AX is a discipline, not a single product switch. It’s tempting to “check the box” by adding an MCP server or wiring up a protocol, but protocols like MCP are just plumbing. What matters is the holistic experience—how easily an agent can understand your codebase, discover dependencies, pick the right patterns, and execute tasks without guessing.

From Sprawl to Symphony: Standardizing Your AI Toolchain

Most teams have drifted into a “one-person band” pattern: every dev picks their own assistant, plugins, or IDE, and the stack sprawls with each model release or tooling trend. One teammate just adopted the latest model; another is deep into a specific spec-driven agent; a third prefers a new AI-native IDE. The result is inconsistent workflows, overlapping costs, and brittle support expectations for your platform.

Sean suggests moving your org from a loose jam session to an orchestra: purpose-built tools intentionally composing toward a shared outcome. That means actively curating which agents you support, defining the workflows they target, and documenting how they’re supposed to operate on your codebase. A cohesive band can do far more together than any number of solo acts.

This standardization doesn’t mean picking one agent forever; it means having a deliberate, evolving toolchain. Expect a fast cadence (e.g., new releases like Gemini 3), but treat changes like product decisions, not experiments. Pick a small set of supported tools (e.g., a spec-driven agent versus a new app builder like Lovable or Bolt) and define when and how to use them. Make sure your platform is ready to serve those agents with the context, constraints, and structure they need.

Crawl, Walk, Run: How to Start Building AX in Your Org

Crawl: Audit actual usage. Don’t ask if people have “tried” AI—assume they have. Ask which tools they use every day or every other day, what they accomplish with them, and where they get stuck. A simple survey (Google Form, internal form) is fine. The goal is to discover the agent patterns in your org: spec-driven development, app generators, IDE copilots, or autonomous task runners.

Walk: Build an internal community. Convene users regularly to share tips, failures, and successful patterns. Run a “pairing with an agent” working session: pick a real ticket, let two devs solve it with their preferred agent approaches, and compare. This spotlights gaps in your AX (missing docs, unclear scripts, version mismatches) and helps standardize on the patterns that win in practice.

Run: Scope support around reality. If your audit shows heavy usage of spec-driven agents, invest first in machine-readable specs, tests, and integration contracts. If users love app-builder agents like Lovable or Bolt, prioritize scaffolding, project templates, and CI hooks that agents can discover and invoke. Treat this as product management for AX: choose where to be great first, then expand.

Context Is Your API: Files, Steering, and Architecture Maps

Agents fill knowledge gaps with confident guesses. They’ll choose a theme you didn’t intend, wire to the wrong API version, or assume a monolith when you run a decoupled frontend-backend-data stack. The fix is to make context a first-class artifact that agents can reliably consume.

Start by codifying “agent-facing” docs: agents.md, steering files, CLAUDE.md files for Claude Code—whatever your toolchain reads. Include non-obvious commands (e.g., how to run tests), expected workflows (e.g., “always run unit tests before opening a PR”), and explicit architectural maps. If your frontend depends on a backend service and that in turn depends on a data layer, be explicit about services, endpoints, auth, and how to navigate repos. Don’t let the agent infer; define.
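As a sketch, an agent-facing context file might look like the following. Every command, path, and service name here is illustrative, not from the episode—substitute whatever your repo actually uses:

```markdown
# agents.md — context for AI agents working in this repo

## Non-obvious commands
- Run unit tests: `npm run test:unit` (always run before opening a PR)
- Regenerate API types: `npm run codegen`

## Architecture map
- `web/` (frontend) calls `api/` (backend) over REST at `/v3/`
- `api/` reads from the data layer in `warehouse/`; never query it directly
- Auth: every service call needs a bearer token from `auth/token.ts`

## Rules
- Use API **v3**; v1/v2 endpoints still exist but are deprecated
- Prefer the scaffolded templates in `templates/` over writing from scratch
```

The point is that the agent reads explicit facts instead of guessing API versions or repo topology from its training data.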

Maintain context like code. As your team discovers pitfalls—ambiguous naming, flaky scripts, confusing integration paths—patch your agent context files. Treat them as part of your developer platform, with code review and versioning. Combine them with “guardrails in code” where possible: script commands, health checks, and scaffolded templates that anchor agent behavior in executable truth rather than prose alone.
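A “guardrail in code” can be as small as a pre-flight script that agents (and CI) run before opening a PR. This is a minimal sketch; the required files and the test command are hypothetical and should mirror what your agents.md documents:

```python
"""Sketch of an executable guardrail agents run before opening a PR.

All names here (file list, test command) are illustrative; keep them
identical to what your agent context files actually promise.
"""
import pathlib
import subprocess

REQUIRED_FILES = ["agents.md", "package.json"]  # hypothetical repo layout


def preflight(repo_root: str = ".") -> list[str]:
    """Return a list of problems; an empty list means the repo is ready."""
    root = pathlib.Path(repo_root)
    return [
        f"missing required file: {name}"
        for name in REQUIRED_FILES
        if not (root / name).exists()
    ]


def run_unit_tests() -> bool:
    """Run the test command the context file tells agents to use."""
    # Hypothetical command; mirror whatever agents.md states.
    result = subprocess.run(["npm", "run", "test:unit"], capture_output=True)
    return result.returncode == 0
```

An agent that calls `preflight()` and stops on any reported problem is anchored in executable truth: the check either passes or it doesn’t, with no room for a confident guess.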

Fix Documentation Drift: Dependencies, Versions, and Registries

Brownfield reality is messy: your model may “know” v1 of a library, your codebase runs v3, and your developers use v2 in several services. Agents stumble when dependency docs, types, and examples are out of sync. Sean calls for better machine-readable dependency metadata and applauds emerging approaches like Tessl’s registry, which aims to host authoritative, agent-consumable docs and types for packages.

Treat dependency documentation as a two-audience problem: builders vs. consumers. Open source repos often conflate maintainer docs (how to build/extend) with integrator docs (how to use/version/upgrade). Agents need the consumer side: precise API signatures, supported versions, migration notes, and usage patterns. If the open-source community doesn’t provide it yet, curate an internal registry-of-truth that maps package names to the version your org uses, with canonical import paths, examples, and upgrade guidance.
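One way to sketch such a registry-of-truth: one machine-readable record per dependency that agents query instead of guessing from training data. The package name, version, and paths below are hypothetical:

```python
"""Sketch of an internal registry-of-truth for dependency context.

Every entry below is hypothetical; the point is the shape: one record
per package with the version your org actually runs, the canonical
import, and where the consumer-facing docs live.
"""
REGISTRY = {
    "left-pad": {  # hypothetical package entry
        "version_in_use": "3.2.1",
        "canonical_import": "import leftPad from 'left-pad'",
        "consumer_docs": "docs/deps/left-pad.md",
        "upgrade_notes": "v2 -> v3 renamed pad() to leftPad()",
    },
}


def context_for(package: str) -> str:
    """Render a registry entry as a prompt-ready snippet for an agent."""
    entry = REGISTRY.get(package)
    if entry is None:
        # Explicitly refuse, so the agent does not fall back to guessing.
        return f"No registry entry for {package!r}; do not guess an API."
    lines = [f"Package: {package}"] + [f"{k}: {v}" for k, v in entry.items()]
    return "\n".join(lines)
```

Injecting `context_for(pkg)` into an agent’s prompt pins it to the version your org runs—the v1/v2/v3 drift problem above—rather than whatever version dominated its training data.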

Finally, remember that MCP, IDE extensions, Context7, and similar tools are implementation details. They can help transport context, but they don’t solve AX by themselves. The durable solution is a discipline: curated toolchains, agent-facing context, dependency registries, and a feedback loop that continually turns observed agent failures into improved guidance and scaffolding.

Key Takeaways

  • Treat agents as users of your platform. AX is DX for machine collaborators. Design for the agent so the developer wins.
  • Standardize the toolchain. Move from one-person AI bands to an orchestrated stack. Pick supported agents and define when/how to use them.
  • Start with an audit and a community. Survey daily usage, run “pair with an agent” sessions, and let real-world friction reveal AX gaps.
  • Make context explicit and executable. Provide agents.md/steering files, non-obvious commands, architecture maps, and scripts/templates that encode the “right way.”
  • Fight dependency drift. Curate a registry-of-truth for versions, types, and examples. Explore registries like Tessl’s and push open-source projects to publish agent-friendly docs.
  • AX is a discipline, not a checkbox. MCP servers and new models help, but sustained value comes from continuous iteration on context, documentation, and workflows.