Use when a PM has a rough idea or a written PRD that needs to be structured into the standard format with P0/P1/P2-tagged user stories. Auto-discovers the source doc in the project folder. Use --approved to skip the challenge step.
Reads whatever the PM has — a full PRD, rough notes, or nothing — and produces a structured prd.md. Interviews only for sections that are missing or weak.
```
/prd --project <name>
/prd --project <name> --approved          # skip challenge step
/prd --project <name> --mocks <figma-url> # include Figma mocks
```

Project: `--project <name>` or `.current-project` — stop if neither.
Scan docs/projects/<name>/ for any .md file that is not a pipeline output (prd.md, tech-design.md, functional-requirements-ea.md, functional-requirements-ga.md).
Check for design mocks: If --mocks <figma-url> was provided, save it. Otherwise scan the source doc for any Figma URLs. If neither, ask once: "Do you have Figma mocks? If yes, share the URL." Optional — skip if none.
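Scanning the source doc for Figma URLs could look like the sketch below. The URL shapes covered (`figma.com/file/...`, `/design/...`, `/proto/...`) are an assumption about common Figma share links, and `find_figma_urls` is a hypothetical helper name.

```python
import re

# Matches common Figma share-link shapes (assumed, not exhaustive).
FIGMA_URL = re.compile(r"https://(?:www\.)?figma\.com/(?:file|design|proto)/\S+")

def find_figma_urls(text: str) -> list[str]:
    """Return every Figma URL found in the source doc text."""
    return FIGMA_URL.findall(text)
```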
If source doc found: Read it and assess each required section:
| Section | Strong signal | Weak / missing |
|---|---|---|
| Problem Statement | Clear problem, persona, why now | Vague, no persona, or missing |
| Goals | Measurable outcomes | Feature list, or missing |
| Non-Goals | Explicit exclusions | Missing |
| User Stories | Stories with priority signals | No stories, no priorities, or too vague |
For each strong section: extract and use as-is. For each weak or missing section: ask a targeted question to fill it. Only ask what's needed — don't re-interview what's already clear.
If no source doc (full interview): Ask each question one at a time, waiting for the answer.
Skip any question the PM's input already answers clearly.
Using the source doc + any interview answers, fill all sections of template.md:
User stories follow the format `US-01 [P0] As a...`; stories without a priority carry a `[NEEDS TAG]` marker. IDs must be preserved in all downstream files. Technical-context sections get the placeholder `<!-- Run /analyze --project <name> to populate this section -->`.

Skip this step if `--approved` was passed, or the PM says anything like "this is approved", "skip challenge", "looks good".
Otherwise, review the draft as a skeptical engineering lead and check for issues.
If issues found: present as a numbered list. Wait for PM response. Update the draft accordingly. If no issues: note "No challenges — PRD looks solid." and proceed.
Write to docs/projects/<name>/prd.md:
```yaml
---
project: <name>
created: <today's date YYYY-MM-DD>
status: draft
design-mocks: <figma-url or blank>
---
```

Then confirm to the PM: "PRD saved to docs/projects/<name>/prd.md. Review the user stories and priority tags. Reply approved when ready."
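The write step above can be sketched in a few lines. The frontmatter fields are exactly those listed; the function name `write_prd` and its signature are illustrative assumptions, not part of the skill.

```python
from datetime import date
from pathlib import Path

def write_prd(project: str, body: str, figma_url: str = "") -> Path:
    """Write prd.md with the draft frontmatter described above (sketch)."""
    out = Path("docs/projects") / project / "prd.md"
    frontmatter = "\n".join([
        "---",
        f"project: {project}",
        f"created: {date.today().isoformat()}",
        "status: draft",
        f"design-mocks: {figma_url}",
        "---",
        "",  # blank line between frontmatter and body
    ])
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(frontmatter + body, encoding="utf-8")
    return out
```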
If the user replies with approval language ("approved", "lgtm", "looks good", "ship it", etc.):
Set status: approved in docs/projects/<name>/prd.md, then point to the next step: "/analyze — scans the codebase to fill in technical context (sections 5-6), scores each story for EA vs GA, and appends a Scope Proposal. Run /analyze --project <name> when ready."
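Flipping the status on approval is a one-field frontmatter edit, sketched below under the assumption that the file was written with `status: draft` exactly as shown earlier; `approve_prd` is a hypothetical helper name.

```python
from pathlib import Path

def approve_prd(project: str) -> None:
    """Flip status: draft -> approved in the prd.md frontmatter (sketch)."""
    prd = Path("docs/projects") / project / "prd.md"
    text = prd.read_text(encoding="utf-8")
    # Replace only the first occurrence so story text is never touched.
    prd.write_text(text.replace("status: draft", "status: approved", 1),
                   encoding="utf-8")
```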