
Everything 100 Episodes Revealed About AI Native Dev

with Guy Podjarny

Chapters

Trailer
[00:00:00]
AI Native DevCon
[00:01:12]
100 episodes in, how did we get here
[00:02:05]
Former GitHub CEO's prediction
[00:03:17]
Revisiting Guy's spec centric prediction
[00:04:06]
Why spec centric quietly became context centric
[00:05:23]
Stop prompting agents, start onboarding them
[00:07:03]
Former GitHub CEO: you can't review what AI writes
[00:09:02]
The human is the bottleneck everywhere now
[00:10:15]
Inside the context development lifecycle
[00:12:37]
What the cloud era already taught us
[00:15:18]
Annie Vella: the skill devs need to let go of
[00:17:20]
Do you love the craft or the creation
[00:18:28]
The best devs were never really coders
[00:21:04]
Armon Dadgar: context is everything in DevOps
[00:24:31]
Why root cause analysis is the killer use case
[00:26:10]
Can agents really write enterprise Terraform
[00:29:04]
Ian Thomas: 80% of Meta engineers use AI weekly
[00:31:45]
The AI haves and the AI have nots
[00:33:42]
Forget token spend, track merged PRs
[00:35:59]
Birgitta Böckeler: AI amplifies everything, good and bad
[00:38:01]
Garbage in, garbage out, now at agent speed
[00:39:24]
Olivier Pomel: the dream of never waking at 3am
[00:42:27]
The nightmare: hackers went agentic
[00:44:55]
Why error tolerance beats human in the loop
[00:48:00]
Mati Staniszewski: voice is the future of AI
[00:50:35]
Claude Code, sub agents, and rambling at your PM
[00:52:02]
Walled gardens vs the open AI web
[00:53:33]
Thanks for listening
[00:55:02]

In this episode

When did writing code stop being the job and start being the hobby?


One hundred episodes in, Guy Podjarny and Simon Maple pull the clips, check the predictions, and trace the through line across conversations with guests from Datadog, ElevenLabs, GitHub, and more.


They get into:

  • The move from spec-driven to context-driven development
  • Why humans become the bottleneck in code review
  • What changes when agents run the SDLC end-to-end
  • Adoption across orgs vs depth of actual usage

Thanks to every guest and every listener who made this possible. On to the next hundred.


Want to have these conversations in person? AI Native DevCon is coming to London on 1st and 2nd June, 2026.

From Spec-Centric to Context Engineering: Lessons from 100 Episodes

The pace of AI development makes predictions precarious. In episode one of the AI Native Dev podcast, Guy Podjarny predicted a shift from code-centric to spec-centric development, where developers would specify what they need and AI would provide the implementation. Looking back after 100 episodes, that prediction was partially right and partially, as former GitHub CEO Thomas Dohmke would say, fundamentally wrong.

The milestone episode brought together clips from guests across the first 100 conversations, with Guy and Simon Maple reflecting on how their thinking has evolved. What emerged was a consistent theme: the industry has moved beyond simply telling AI what to build and toward training AI how to build.

From Specs to Context: The Evolution

The original spec-centric vision imagined capturing intent in natural language specifications that AI would implement. That part holds up. What the early framing missed was how narrow that focus was compared to what great developers actually do.

"When you think about your dev team and what you expect of them, you don't really say, hey, make sure that every time you read this doc and follow the exact instructions," Guy reflected. "Generally you want them to make good decisions, including choosing when to update documents and read them."

A great developer in any organization brings judgment about quality versus speed tradeoffs, collaboration patterns, testing approaches, and infrastructure choices. Spec-driven development addresses a subset of those concerns. The fuller picture involves what Guy now calls "speaking the programmer" rather than "speaking the program," essentially training AI agents to behave like the developers you would want on your team.

The unit of software has shifted accordingly. Where the early conversation focused on specs, the industry now discusses skills: broader instructions about how to develop, not just what to develop. Context engineering has become the discipline of providing AI agents with everything they need: not just task specifications but organizational standards, architectural patterns, and decision-making frameworks.
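
In practice, that kind of context often lives in a plain instructions file checked into the repository. The file name and contents below are a hypothetical illustration of the idea, not something quoted from the episode:

```markdown
# agent-instructions.md (hypothetical example)

## How we develop
- Prefer small, reviewable PRs; every change ships with tests.
- Follow the existing layering: handlers -> services -> repositories.
- Ask before adding a new third-party dependency.

## Quality vs speed tradeoffs
- Prototypes may skip polish; anything merged to main must pass CI and lint.
- When unsure, document the decision in the PR description rather than guessing.
```

The point is the shift in content: these are instructions about how to behave as a developer in this organization, not a specification of any single feature.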

The Human Bottleneck Problem

Thomas Dohmke articulated what many guests confirmed: humans cannot review the volume of code that agents can produce. Running ten agents in parallel around the clock generates more code than any person can meaningfully evaluate without becoming the bottleneck that erases productivity gains.

The response pattern that emerged across episodes was not to simply skip review but to automate the end-to-end software development lifecycle. Code review is the immediate pressure point. But as Guy noted, once code flows through review, the next constraint becomes deployment, then observability, then incident response. Throughout the entire cycle, human involvement at the same intensity as before breaks down.

"Our aspiration really has to be to identify every single step of that process and automate it," Guy observed. The human role shifts from frontline execution to something closer to management: defining what correct looks like, conveying instructions, identifying mistakes, and resolving them.

This maps to the context development lifecycle (https://claude.ai/blog/context-engineering-guide): generating context about desired behavior, testing and evaluating whether instructions are followed, distributing that context to agents, observing what happens, and learning from the results to update instructions. Humans operate in that lifecycle while agents operate in the software development lifecycle itself.
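
That loop can be sketched in a few lines. This is a minimal, illustrative Python sketch of the generate-test-distribute-observe-learn cycle described above; `lifecycle_step`, the echoing `agent`, and the eval entries are all invented stand-ins, not a real framework or API:

```python
def lifecycle_step(context, agent, evals):
    """One turn of the context development lifecycle: run each eval
    against the agent under the current context, observe which checks
    fail, and fold each failure's lesson back into the instructions."""
    failures = [e for e in evals if not e["check"](agent(context))]
    for e in failures:
        context += "\n- " + e["lesson"]  # learn: update the instructions
    return context, len(failures)

# Toy stand-ins: the "agent" just echoes its instructions verbatim,
# so a lesson is "followed" once it appears in the context.
agent = lambda ctx: ctx
evals = [
    {"check": lambda out: "tests" in out, "lesson": "always write tests"},
    {"check": lambda out: "docs" in out, "lesson": "update the docs"},
]

context = "Ship small PRs."
context, n_fail = lifecycle_step(context, agent, evals)    # both evals fail
context, n_fail2 = lifecycle_step(context, agent, evals)   # lessons absorbed
```

The second iteration passes because the lessons from the first were written back into the context, which is the closed-loop behavior the lifecycle describes.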

What Makes a Great Developer Now

Annie Vella from Westpac surfaced an uncomfortable truth: the skill of solving complex coding problems quickly, the skill celebrated in technical interviews and leaderboards, matters less when AI can generate good-enough solutions faster than humans can type.

"That is a skill I think we're going to need to learn to let go of, both in interviews and personally," she noted.

Guy distinguished between developers motivated by the craft of coding and those motivated by creation and impact. For the creation-oriented, agents are tools that shorten the journey from idea to execution. For the craft-oriented, there is an adjustment to make.

The best developers, even before AI, were those who understood the problem deeply and saw code writing as translation from understanding to implementation. That profile remains valuable. What changes is the speed of iteration and the loss of thinking time that slower coding provided.

"As coding becomes something that's very fast and actually taken away from us slightly, we do lose that time to think as we're building," Simon observed. "But what we do gain is that faster iteration which allows us to gain feedback faster and throw away quicker."

The emerging pattern involves making architectural decisions explicit, having agents document their reasoning, and being willing to discard and rebuild rather than incrementally modify. Chad Fowler's recent episode on regenerative software and Phoenix architecture directly addresses this shift toward disposable, rebuildable code.

DevOps Context and Production Reality

Armon Dadgar from HashiCorp emphasized early that context differentiates great AI from useless AI. You might hire the world's best SRE, but they need to know whether your organization runs Windows or Red Hat in production before they can contribute meaningfully. The same applies to AI agents.

The DevOps world has embraced context engineering for root cause analysis, where sifting through massive amounts of observability data plays to AI strengths and where errors waste time rather than destroy systems. Production deployment and modification remain more cautious. The blast radius of dropping a database in production creates appropriate hesitation.

Mirko from Dash0 and Olivier Pomel from Datadog both emphasized context as central to troubleshooting and automation. The dream Olivier described, never waking up at 3 a.m. to fix an issue because the system handles it automatically, requires precision in root cause analysis that was science fiction four years ago but now seems within reach.

The flip side, which multiple guests addressed, involves security risks accelerating alongside capabilities. Attackers use the same AI capabilities to find vulnerabilities, craft phishing attempts, and exploit supply chain weaknesses. The forcing function pushes toward end-to-end automation not just for efficiency but for security response speed.

Amplifying Good and Bad Equally

Birgitta from ThoughtWorks captured a critical insight: AI amplifies indiscriminately. Good engineering practices get multiplied. Bad ones do too. If your code base contains patterns you would not want replicated, agents will replicate them. If your knowledge base has outdated documentation, AI will learn from it.

This drives the shift away from "learn from our code" approaches toward explicit context engineering. Rather than hoping AI infers correct behavior from existing artifacts, organizations increasingly define what correct behavior looks like and provide that as context.

The observation requirement extends beyond initial setup. Context goes stale. Practices change. Security vulnerabilities emerge. The systems need to be self-healing not just in the DevOps sense of recovering from failures but in the sense of updating their own instructions based on observed outcomes.

Intercom's approach, where a test generator finds bugs and then searches for similar patterns elsewhere in the system, represents this direction: closed-loop learning that updates context based on production reality.

What Comes Next

The retrospective pointed toward several near-term pressure points: automated code review to remove human bottlenecks, security response automation to match attacker speed, and observability-driven context updates to prevent staleness. Each requires extending the end-to-end automation that cloud-era development initiated.

Some things tolerated in the cloud era will not be tolerated in the AI era. Guy drew the parallel to how practices acceptable in waterfall became inadequate for cloud speed. Similarly, practices acceptable for human-paced development will not survive AI-paced development.

The consistent thread across 100 episodes has been treating AI as team members to be trained rather than tools to be wielded. Onboarding, continuous learning, and organizational alignment matter for agents just as they do for human developers. The difference is that agents can scale, which makes getting the training right both more valuable and more urgent.

The full retrospective is worth listening through for the guest clips and additional commentary. And for those keeping score on predictions, Guy's spec-centric framing was directionally correct, just insufficiently broad. The next 100 episodes will reveal which of the current frameworks hold up and which get labeled fundamentally wrong.
