Generate PRD and Design Docs from existing codebase through discovery, generation, verification, and review workflow
Score: 55%. Does it follow best practices?
Impact: Pending. No eval scenarios have been run.
Issues: Passed. No known issues.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/recipe-reverse-engineer/SKILL.md`

Quality
Discovery
32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (generating PRDs and design documents from code) and hints at a structured workflow, but lacks explicit trigger guidance ('Use when...') and specific concrete actions. It would benefit from expanded trigger terms covering common synonyms and a clear statement of when Claude should select this skill.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to generate a PRD, product requirements document, design doc, technical specification, or wants to reverse-engineer documentation from an existing codebase.'
- Expand trigger terms to include natural variations: 'product requirements document', 'technical design document', 'TDD', 'spec', 'specification', 'document existing code', 'reverse engineer documentation'.
- List more specific concrete actions, e.g., 'Analyzes code structure, extracts module dependencies, identifies API contracts, and produces structured PRD and design documents through a discovery, generation, verification, and review workflow.'
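Putting the suggestions together, a revised frontmatter description might look like the sketch below. This is illustrative only: the skill name is taken from the `npx` command path above, and any other keys in the actual SKILL.md frontmatter are assumed unchanged.

```yaml
---
name: recipe-reverse-engineer
description: >
  Generate PRD and design documents from an existing codebase through a
  discovery, generation, verification, and review workflow. Analyzes code
  structure, extracts module dependencies, and identifies API contracts.
  Use when the user asks to generate a PRD, product requirements document,
  design doc, technical specification, or wants to reverse-engineer
  documentation from existing code.
---
```

A multi-sentence description like this covers the "what", the concrete actions, and an explicit "Use when..." trigger clause in one block.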
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (PRD and Design Docs) and mentions a workflow (discovery, generation, verification, review), but doesn't list specific concrete actions like 'analyze code structure', 'extract API contracts', or 'document architecture decisions'. | 2 / 3 |
| Completeness | Describes what it does (generate PRD and Design Docs from a codebase) but has no explicit 'Use when...' clause or trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and here the 'when' is entirely absent, not even implied beyond the basic 'what' statement. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'PRD', 'Design Docs', and 'codebase', which users might naturally say. However, it misses common variations like 'product requirements document', 'technical design document', 'TDD', 'spec', 'specification', 'documentation from code', or 'reverse engineer docs'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of PRD/Design Docs from an existing codebase is somewhat distinctive, but 'Design Docs' is broad enough to overlap with general documentation skills, and 'codebase' could conflict with code-analysis or documentation-generation skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted orchestration skill with excellent workflow clarity and actionability. The agent invocation templates are specific and complete, quality gates have concrete numeric thresholds, and error handling is well-defined. The main weakness is length — some patterns are repeated (standard vs fullstack mode, Steps 5 and 10 are nearly identical) — though this is partly inherent to the complexity of the workflow being described.
Suggestions
- Extract the repeated revision-step pattern (Steps 5 and 10 are identical except for doc type and agent type) into a shared reference, or note that Step 10 follows the same pattern as Step 5, reducing duplication.
- Consider moving the fullstack-mode details (Steps 7a/7b and their verification/review variants) into a separate FULLSTACK.md reference file, keeping the main skill focused on the standard flow.
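The quality gates and revision loop praised in this review can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the field names `consistencyScore` and `verifiableClaimCount` and the thresholds (>= 70, >= 20, max 2 revision cycles before human escalation) are taken from the review text, while the function names and result shape are assumptions.

```python
MAX_REVISION_CYCLES = 2  # per the review: escalate to a human after 2 failed cycles


def gate_passes(result: dict) -> bool:
    """Apply the numeric thresholds the review attributes to the skill's quality gate."""
    return (result["consistencyScore"] >= 70
            and result["verifiableClaimCount"] >= 20)


def run_with_revisions(verify, revise):
    """Verify, revise on failure, and escalate after MAX_REVISION_CYCLES.

    `verify` returns a result dict; `revise` takes the failing result and
    triggers a revision pass (both hypothetical callables for illustration).
    """
    result = verify()
    for _ in range(MAX_REVISION_CYCLES):
        if gate_passes(result):
            return result
        revise(result)
        result = verify()
    if not gate_passes(result):
        raise RuntimeError(
            "Quality gate still failing after max revision cycles; "
            "escalate to human review")
    return result
```

The point of the bounded loop is exactly what the Workflow Clarity row notes: the feedback cycle cannot run forever, and a persistent failure surfaces to a human instead of looping.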
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is lengthy (~300 lines), but most content is structural workflow definition with agent invocation templates. Some redundancy exists (e.g., fullstack mode repeats similar patterns for backend/frontend, and trigger conditions are restated identically for Steps 5 and 10), but overall the content is necessary for an orchestration workflow of this complexity. | 2 / 3 |
| Actionability | Highly actionable, with specific agent invocation templates including subagent_type, exact prompt structures, variable-passing conventions, quality gates with numeric thresholds (consistencyScore >= 70, verifiableClaimCount >= 20), and concrete trigger conditions. Every step has copy-paste-ready agent invocation blocks. | 3 / 3 |
| Workflow Clarity | Excellent sequential workflow with explicit quality gates, validation checkpoints (consistency scores, verifiable claim counts), feedback loops (max 2 revision cycles with human escalation), clear phase/step numbering, and checklists for unit completion. The error-handling table covers failure modes with specific actions. | 3 / 3 |
| Progressive Disclosure | The content is well structured with clear headers and a workflow-overview diagram, but everything is in a single monolithic file. The fullstack-mode details (Steps 7a/7b) and Phase 2 could potentially be split into separate reference files. However, for an orchestration skill, having the complete workflow visible is defensible. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
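The lone warning concerns unrecognized top-level frontmatter keys. A hedged sketch of the fix is below; the key name `customOwner` is hypothetical, and the fix assumes the spec accepts arbitrary keys under a `metadata` map, as the warning's "moving to metadata" wording suggests (verify against the actual skill spec).

```yaml
---
name: recipe-reverse-engineer
# Before: a custom top-level key triggers frontmatter_unknown_keys
# customOwner: platform-team

# After: unrecognized keys moved under metadata
metadata:
  customOwner: platform-team
---
```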