prd

Use when a PM has a rough idea or a written PRD that needs to be structured into the standard format with P0/P1/P2-tagged user stories. Auto-discovers the source doc in the project folder. Use --approved to skip the challenge step.

Quality: 76% (does it follow best practices?)
Impact: 100% (1.85x)
Average score across 3 eval scenarios.

Security by Snyk: Passed, no known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.claude/skills/prd/SKILL.md

PRD Skill

Reads whatever the PM has — a full PRD, rough notes, or nothing — and produces a structured prd.md. Interviews only for sections that are missing or weak.

Arguments

/prd --project <name>
/prd --project <name> --approved          # skip challenge step
/prd --project <name> --mocks <figma-url> # include Figma mocks

Project name: take --project <name>, else fall back to .current-project. Stop if neither is set.


Steps

Step 1: Find source doc

Scan docs/projects/<name>/ for any .md file that is not a pipeline output (prd.md, tech-design.md, functional-requirements-ea.md, functional-requirements-ga.md).

  • One file found → read it. This is the source doc.
  • Multiple files found → ask: "Found [file1, file2] — which one should I use?"
  • No file found → proceed with full interview (Step 2a).
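The discovery rule above can be sketched as a small filter. This is a sketch, not part of the skill itself; `find_source_docs` and `PIPELINE_OUTPUTS` are illustrative names:

```python
from pathlib import Path

# Pipeline outputs that must never be treated as source docs.
PIPELINE_OUTPUTS = {
    "prd.md",
    "tech-design.md",
    "functional-requirements-ea.md",
    "functional-requirements-ga.md",
}

def find_source_docs(project_dir):
    """Return candidate source docs in docs/projects/<name>/, sorted by name."""
    return sorted(
        p for p in Path(project_dir).glob("*.md")
        if p.name not in PIPELINE_OUTPUTS
    )
```

Zero, one, or many results then map directly onto the three branches above.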

Check for design mocks: If --mocks <figma-url> was provided, save it. Otherwise scan the source doc for any Figma URLs. If neither, ask once: "Do you have Figma mocks? If yes, share the URL." Optional — skip if none.
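A minimal sketch of the mock-URL scan, assuming mocks are shared as standard figma.com links; the regex and helper name are assumptions for illustration:

```python
import re

# Matches figma.com links up to the next whitespace (an assumption about
# how mock URLs appear in prose).
FIGMA_URL = re.compile(r"https://(?:www\.)?figma\.com/\S+")

def find_figma_urls(text):
    """Return every Figma URL found in the source doc's text."""
    return FIGMA_URL.findall(text)
```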

Step 2: Assess and fill gaps

If source doc found: Read it and assess each required section:

| Section | Strong signal | Weak / missing |
| --- | --- | --- |
| Problem Statement | Clear problem, persona, why now | Vague, no persona, or missing |
| Goals | Measurable outcomes | Feature list, or missing |
| Non-Goals | Explicit exclusions | Missing |
| User Stories | Stories with priority signals | No stories, no priorities, or too vague |

For each strong section: extract and use as-is. For each weak or missing section: ask a targeted question to fill it. Only ask what's needed — don't re-interview what's already clear.

If no source doc (full interview): Ask each question one at a time, waiting for the answer:

  1. "Who is the primary user? Describe their role and context."
  2. "What does success look like 6 months after GA? Give 1–2 measurable outcomes."
  3. "Walk me through the must-haves [P0], important-but-deferrable [P1], and nice-to-haves [P2]."
  4. "What are the explicit non-goals — things NOT in scope?"
  5. "Any known dependencies or open questions before development starts?"
  6. "Any constraints — timeline, compliance, existing systems?"
  7. "Does this involve AI/ML? If yes: model, data source, how quality is measured."

Skip any question the PM's input already answers clearly.

Step 3: Draft the PRD

Using the source doc + any interview answers, fill all sections of template.md:

  • Apply P0/P1/P2 tags to every user story based on PM's language ("must/required/critical" → P0, "should/important" → P1, "nice to have/future" → P2). Hedged language ("maybe", "could", "might") → [NEEDS TAG]
  • Assign each story a stable ID: US-01 [P0] As a.... IDs must be preserved in all downstream files.
  • Leave sections 5-6 as: <!-- Run /analyze --project <name> to populate this section -->
  • If AI/ML is involved, add placeholder under Open Questions: "AI components involved — fill in model, data source, evaluation approach, and failure modes before tech design."
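The tagging and ID rules above can be sketched as a lookup. The keyword lists are assumptions drawn from the examples in this step, not an exhaustive model of PM language:

```python
# Hedged language is checked first and always wins over a priority signal.
HEDGED = ("maybe", "could", "might")
SIGNALS = (
    ("P0", ("must", "required", "critical")),
    ("P1", ("should", "important")),
    ("P2", ("nice to have", "future")),
)

def priority_tag(pm_phrase):
    """Map a PM's phrasing to P0/P1/P2, or [NEEDS TAG] when unclear."""
    text = pm_phrase.lower()
    if any(w in text for w in HEDGED):
        return "[NEEDS TAG]"
    for tag, words in SIGNALS:
        if any(w in text for w in words):
            return tag
    return "[NEEDS TAG]"  # no priority signal at all

def story_line(index, tag, story):
    """Render a story with its stable US-XX ID, e.g. 'US-01 [P0] As a ...'."""
    return f"US-{index:02d} [{tag}] {story}"
```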

Step 4: Challenge the draft

Skip this step if --approved was passed, or the PM says anything like "this is approved", "skip challenge", "looks good".

Otherwise, review the draft as a skeptical engineering lead. Check for:

  1. Unstated assumptions — infra or behaviors taken for granted
  2. Missing personas — user types touched by the feature with no story
  3. Story contradictions — stories conflicting with each other or with non-goals
  4. Scope vagueness — stories too broad to estimate
  5. Missing edge cases — obvious failure modes not addressed

If issues found: present as a numbered list. Wait for PM response. Update the draft accordingly. If no issues: note "No challenges — PRD looks solid." and proceed.

Step 5: Write file

Write to docs/projects/<name>/prd.md:

---
project: <name>
created: <today's date YYYY-MM-DD>
status: draft
design-mocks: <figma-url or blank>
---
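A hypothetical helper that writes the file with this frontmatter; the function name, parameters, and defaults are assumptions, not part of the skill:

```python
from datetime import date
from pathlib import Path

def write_prd(project, body, mocks_url="", out_root="docs/projects"):
    """Write docs/projects/<name>/prd.md with draft-status frontmatter."""
    frontmatter = "\n".join([
        "---",
        f"project: {project}",
        f"created: {date.today():%Y-%m-%d}",
        "status: draft",
        f"design-mocks: {mocks_url}",  # blank when no Figma URL was found
        "---",
        "",
    ])
    path = Path(out_root) / project / "prd.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(frontmatter + body)
    return path
```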

Step 6: Prompt PM

"PRD saved to docs/projects/<name>/prd.md.

Review the user stories and priority tags. Reply approved when ready."

If the user replies with approval language ("approved", "lgtm", "looks good", "ship it", etc.):

  1. Set status: approved in docs/projects/<name>/prd.md
  2. Reply: "PRD approved. Next: /analyze — scans the codebase to fill in technical context (sections 5-6), scores each story for EA vs GA, and appends a Scope Proposal. Run /analyze --project <name> when ready."

Notes

  • Do not invent requirements — only use what came from the source doc or interview
  • Do not pre-split stories into EA/GA — that is /analyze's job
  • Every user story must have a P0/P1/P2 tag and a stable US-XX ID before the file is written
Repository: PagerDuty/ai-forward-planning
