
Why Faster AI Development Often Increases Rework

with Cian Clarke

Chapters

Introduction
[00:00:58]
Challenges and Future of AI-Driven Development
[00:08:52]
Challenges and Solutions in Spec-Driven Development
[00:27:30]
Spec-Driven Development in Startups
[00:34:38]
Deep Dive into BMAD Methodology
[00:35:55]
Future of AI in Software Development
[00:42:13]

In this episode

In this episode, host Simon Maple chats with Cian Clarke from NearForm about BMAD, an open-source, spec-driven framework for AI-native software development. Discover how BMAD moves AI-assisted development from impressive demos to practical solutions by centering development on rich specifications and structured context, leading to more consistent outputs and reduced ambiguity. Learn practical tips for adopting a spec-first approach to ship software that reflects intentional design.

AI-native development grows up in this episode as host Simon Maple sits down with NearForm's Head of AI, Cian Clarke, to unpack BMAD (Build More Architect Dream), an open-source, spec-driven workflow named after its creator, Brian Madison. The conversation moves from why "vibe coding" with generative models needs guardrails, to how spec-first practices and better "context engineering" can make AI actually ship software, not just demos. Cian walks through NearForm's adoption journey, what BMAD v6 changes, and practical advice for developers on picking the right tool for the job.

BMAD: Spec-Driven AI Development and “Context Engineering on Steroids”

BMAD is an open-source framework for spec-driven software delivery with AI. It’s model-vendor agnostic and IDE-agnostic, which appealed to NearForm given its open-source pedigree (Fastify, Node.js core). Rather than relying on one-shot prompts or loose interactive sessions, BMAD centers the process around rich specifications—artifacts that represent the “what” and the “why” before AI generates the “how.” In Cian’s words, it’s really intelligent context engineering for software projects.

The v6 release is a notable evolution: teams can now select the scale of the project they're attempting (e.g., prototype vs. MVP vs. more ambitious builds), and a new module system lets you "bring your own roles" to shape how the model behaves (think architect, tech lead, or QA sensibilities encoded as roles). These capabilities move BMAD beyond a clever demo tool toward a more comprehensive, team-friendly workflow, letting you steer the AI with domain language, constraints, and team norms.
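
To make the role idea concrete, here is a minimal sketch of what "bring your own roles" might look like as structured context. This is not BMAD's actual module format; the RoleModule shape and composeContext helper are hypothetical, written in TypeScript to match the Node.js ecosystem discussed here:

```typescript
// Hypothetical sketch only; BMAD's real module format is not shown here.
// The RoleModule type and composeContext helper are illustrative assumptions.

interface RoleModule {
  name: string;          // e.g. "architect" or "qa"
  perspective: string;   // what this role pays attention to
  constraints: string[]; // non-negotiables the model must respect
}

const architect: RoleModule = {
  name: "architect",
  perspective: "System boundaries, data flow, and long-term maintainability.",
  constraints: [
    "Prefer Fastify for HTTP services.",
    "No new runtime dependencies without a note in the spec.",
  ],
};

const qa: RoleModule = {
  name: "qa",
  perspective: "Testability, edge cases, and regression risk.",
  constraints: ["Every generated module ships with unit tests."],
};

// Render the roles into a context preamble the model sees before each task.
function composeContext(roles: RoleModule[]): string {
  return roles
    .map(
      (r) =>
        `## Role: ${r.name}\n${r.perspective}\nConstraints:\n` +
        r.constraints.map((c) => `- ${c}`).join("\n")
    )
    .join("\n\n");
}

console.log(composeContext([architect, qa]));
```

The design point is that team norms become data the workflow injects on every task, rather than folklore each developer re-prompts by hand.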

At its core, BMAD treats specs as living, executable context. Instead of letting the model invent missing details, teams make those trade-offs explicit up front. The payoff is more consistent outputs, fewer rewrites, and a codebase that reflects intentional design, not the stochastic creativity of a coding model left to fill in the blanks.

Why Spec-First Now: Vibe Coding’s Ambiguity Tax

Vibe coding is great for quick explorations, but ambiguity in requirements becomes expensive fast when models are generating whole systems. Humans will pause to ask clarifying questions when requirements are fuzzy; models can and will "make something up." That manifests as rework, wasted tokens, and code smells as ad hoc choices ossify in the repo. What used to be a nuisance in traditional development is magnified with AI, because the model acts decisively on incomplete direction.

BMAD (and spec-driven practices generally) attack this ambiguity tax by grounding the model in clear constraints, behaviors, and desired outcomes before code generation begins. It separates the “what” (user outcomes, domain rules, constraints, business levers) from the “how” (tech choices, libraries, patterns), and encodes both for the model in structured docs. This is akin to BDD-era rigor but adapted for generative coding models: not a monolithic prompt, but a set of artifacts the system continuously references.
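
As a rough illustration of that what/how split, here is a minimal sketch with an entirely hypothetical Spec shape (not a BMAD artifact format) showing how a team might encode both halves so the model never has to guess:

```typescript
// Hypothetical sketch: a spec that separates the "what" from the "how".
// The Spec type and checkoutSpec example are illustrative, not BMAD formats.

interface Spec {
  what: {
    outcome: string;       // user-facing behavior, in domain language
    domainRules: string[]; // business rules the code must honor
    constraints: string[]; // budgets, compliance, performance targets
  };
  how: {
    stack: string[];       // tech choices the team has already made
    patterns: string[];    // conventions generated code should follow
  };
}

const checkoutSpec: Spec = {
  what: {
    outcome: "A signed-in user can check out a cart in under three steps.",
    domainRules: ["Prices are quoted in the user's billing currency."],
    constraints: ["p95 checkout latency stays under 500 ms."],
  },
  how: {
    stack: ["Node.js", "Fastify", "PostgreSQL"],
    patterns: ["Repository pattern for data access", "Schema-validated inputs"],
  },
};

console.log(JSON.stringify(checkoutSpec, null, 2));
```

The trade-offs live in a reviewable artifact, where stakeholders can challenge them, instead of in whatever the model improvised on a given run.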

The result isn’t just better code—it’s reduced uncertainty and better alignment with stakeholders. When the system does need to make novel decisions, those are guided by explicit roles and constraints rather than the model’s guesswork. That keeps the repo cleaner and the iteration loop tighter.

From Discovery to Executable Specs: NearForm’s Workflow

NearForm’s journey to BMAD flowed naturally from its consulting practice. Their Ignite discovery sessions align stakeholders on goals, constraints, and measures of success. NearForm started capturing these conversations with AI note-takers, then distilled them into playback decks, requirement docs, and backlogs. That artifact pipeline became the perfect feedstock for BMAD, turning discussion into the structured context a model can execute against.

This mirrors well-known product practices like Amazon's PR/FAQ (writing the future press release and FAQ to surface requirements early) and squad mobbing. With BMAD, those artifacts don't just live in Notion; they guide the generation of the software itself. The documentation evolves into a control surface for the model, reducing randomness and allowing teams to "tune" behavior via updated specs and role modules.

NearForm also cross-pollinates with other AI-native dev tools. Kiro often sits alongside BMAD as a strong contender for AI-accelerated delivery, and the team explores tools like Tessl for discovery-in-context and agent enablement. The key is picking tools that amplify a spec-driven workflow rather than encouraging ad hoc coding.

Where BMAD Shines—and Where to Use Alternatives

BMAD has become the tool of choice at NearForm for AI-native development of greenfield MVPs and shippable systems. When your goal is a real product—something you’ll keep, maintain, and evolve—BMAD’s upfront structure pays dividends. It’s also the most capable approach Cian’s team has found for brownfield work, though legacy code always introduces friction. Even then, having roles and modules that express your architectural intent can guide the model through complex repos more credibly than freeform prompting.

Historically, BMAD’s process felt heavy for rapid prototyping or throwaway experiments. V6 addresses this with the ability to select a smaller project scale and simplified workflows, but NearForm still reaches for tools like Bolt.new when speed-to-demo matters more than longevity. The pragmatic guidance is simple: if it’s a spike you’ll throw away, go lightweight; if it’s an MVP or production-bound artifact, invest in specs and BMAD.

Practically, adopting BMAD looks like this: run a discovery session (Ignite-style) to align stakeholders and document constraints, distill that into specs and a prioritised backlog, choose your project scale in BMAD, define role modules that reflect your team’s architectural and quality preferences, and let the tool orchestrate model output. Iterate by refining the spec and roles—not by patching random code the model produced.
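
Here is a sketch of that iteration loop, under the assumption that generated code is treated as a pure function of the spec and roles. The generateCode stand-in and the Artifacts type below are hypothetical, not a BMAD API:

```typescript
// Hypothetical sketch: generated code as a function of (spec, roles).
// generateCode stands in for whatever model call your tooling makes;
// none of this reflects BMAD's actual API.

import { createHash } from "node:crypto";

type Artifacts = { spec: string; roles: string[] };

// Fingerprint the artifacts so every output is traceable to the exact
// spec version that produced it.
function fingerprint(artifacts: Artifacts): string {
  return createHash("sha256")
    .update(JSON.stringify(artifacts))
    .digest("hex")
    .slice(0, 8);
}

async function generateCode(artifacts: Artifacts): Promise<string> {
  // Placeholder for the actual model invocation.
  return `// generated from artifacts ${fingerprint(artifacts)}`;
}

// Iterate by folding review feedback back into the spec and regenerating,
// rather than hand-patching the previous output.
async function iterate(
  artifacts: Artifacts,
  feedback: string
): Promise<string> {
  const revised: Artifacts = {
    ...artifacts,
    spec: `${artifacts.spec}\n\nClarification: ${feedback}`,
  };
  return generateCode(revised);
}
```

Keeping the spec as the single input makes drift visible: if the output changed, the artifacts changed, and the diff tells you why.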

Developer Empathy as a Design Principle for AI-Native Workflows

Cian frames all of this through developer empathy: what parts of the job did we hate when we were on the keyboard? Think CloudFormation stacks rolling back repeatedly, flaky UI tests exploding because a button shifted ten pixels, or documentation rotting as code shifts. AI should target those friction points first—automating drudge work and eliminating toil—so humans focus on domain logic, system design, and quality.

Spec-driven workflows like BMAD align well with this ethos. They reduce the need for developers to reverse-engineer intent from autogenerated code and instead place intent at the center. Even tasks like documentation become byproducts of the same artifacts steering development. And because BMAD is open source, model-agnostic, and IDE-agnostic, teams keep control over their stack, vendor choices, and code ownership—key concerns for developer trust.

The broader NearForm perspective is to care not just about what you build with AI, but how you build it. Tooling choices (BMAD, Kiro, Bolt.new) should be made in service of predictable delivery, developer happiness, and sustainable codebases. That’s how AI moves from novelty to a dependable part of the software supply chain.

Key Takeaways

  • Specs first, prompts second: Treat specs as living context. Encode the “what” and the “why” up front, and let the model generate the “how.” This avoids hallucinated features, reduces rework, and keeps repos clean.
  • Use the right tool for the job: BMAD for greenfield MVPs and shippable systems; Bolt.new (or similar) for quick throwaway prototypes; Kiro as a complementary option for AI-accelerated delivery. BMAD v6 narrows the prototyping gap, but stay pragmatic.
  • Lean into discovery artifacts: Ignite-style discovery sessions, AI note-takers, playback decks, PR/FAQs, and squad mobbing produce the perfect feedstock for BMAD. Don’t rely on one-shot prompts—feed the model real documentation.
  • Exploit BMAD v6 features: Select project scale to right-size the process and use the module system to “bring your own roles” (architect, QA, etc.) that reflect your team’s patterns, standards, and non-negotiables.
  • Optimise for developer empathy: Aim AI at toil (infrastructure drudgery, flaky tests, documentation maintenance) and use spec-driven workflows to keep developers focused on design and quality, not guesswork.
  • Open, portable, and controllable: BMAD’s open-source, model-vendor-agnostic, and IDE-agnostic stance helps teams preserve flexibility, reduce lock-in, and align with existing Node.js/Fastify ecosystems and practices.

Developers building AI-native applications can apply BMAD today to turn discovery into executable context, reduce ambiguity, and ship software that reflects intent—faster and with fewer surprises.