Generate PRD and Design Docs from existing codebase through discovery, generation, verification, and review workflow
55%

Does it follow best practices?

Impact: — (no eval scenarios have been run)
Risky — do not use without reviewing.

Optimize this skill with Tessl:
npx tessl skill review --optimize ./skills/recipe-reverse-engineer/SKILL.md

Quality
Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (generating PRDs and design documents from code) and hints at a structured workflow, but it lacks explicit trigger guidance ('Use when...'), misses common keyword variations, and doesn't enumerate specific concrete actions beyond naming the workflow phases. It would benefit significantly from a 'Use when...' clause and more specific capability details.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to generate a PRD, product requirements document, design doc, technical specification, or wants to reverse-engineer documentation from an existing codebase.'
- Include common keyword variations such as 'product requirements document', 'technical design document', 'TDD', 'spec', 'specification', 'reverse engineer documentation'.
- List more specific concrete actions, e.g., 'Analyzes code structure and dependencies, extracts API contracts, documents architecture decisions, and produces formatted PRD and design documents.'
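Taken together, these suggestions might look like the following frontmatter sketch (the name field and exact wording are illustrative, not taken from the skill itself):

```yaml
---
name: recipe-reverse-engineer
description: >
  Generate PRD and Design Docs from an existing codebase. Analyzes code
  structure and dependencies, extracts API contracts, documents architecture
  decisions, and produces formatted PRD and design documents. Use when the
  user asks to generate a PRD, product requirements document, design doc,
  technical specification, or wants to reverse-engineer documentation
  from an existing codebase.
---
```

A description in this shape covers the domain, the concrete actions, the trigger clause, and the keyword variations the rubric looks for.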
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (PRD and Design Docs) and mentions a workflow (discovery, generation, verification, review), but doesn't list specific concrete actions like 'analyze code structure', 'extract API contracts', or 'document architecture decisions'. | 2 / 3 |
| Completeness | Describes what it does (generate PRD and Design Docs from a codebase) but has no explicit 'Use when...' clause or trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2; because the 'when' is entirely absent here, it scores 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'PRD', 'Design Docs', and 'codebase', which users might naturally say. However, it misses common variations like 'product requirements document', 'technical design document', 'TDD', 'spec', 'specification', 'documentation from code', or 'reverse engineer docs'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of PRD/Design Docs from an existing codebase is somewhat distinctive, but 'design docs' and 'documentation' could overlap with general documentation skills. The lack of explicit trigger boundaries increases conflict risk. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation — 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured orchestration skill with excellent actionability and workflow clarity — every step has concrete agent invocations, explicit quality gates with numeric thresholds, and clear feedback loops. The main weaknesses are moderate verbosity from repeated patterns (especially fullstack mode duplicating standard mode blocks) and the monolithic structure that could benefit from splitting detailed agent invocation templates into separate reference files.
Suggestions
- Extract fullstack-mode agent invocation variants into a separate reference file (e.g., FULLSTACK_MODE.md) to reduce repetition in the main skill body.
- Consider consolidating the repeated verification/review/revision patterns (Steps 3-5 mirror Steps 8-10) into a reusable sub-workflow reference to improve conciseness.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is lengthy but most content is necessary for the complex multi-phase orchestration workflow. However, there's some redundancy — the fullstack mode patterns repeat similar invocation blocks with minor variations, and some explanations (like 'No additional discovery required') could be trimmed. The agent invocation blocks are well-structured but verbose. | 2 / 3 |
| Actionability | Every step includes concrete agent invocation blocks with specific subagent_type, description, and prompt templates. Quality gates have explicit numeric thresholds (consistencyScore >= 70, verifiableClaimCount >= 20), trigger conditions are clearly enumerated, and variable passing ($STEP_N_OUTPUT) is explicit throughout. The workflow is copy-paste ready for an orchestrator agent. | 3 / 3 |
| Workflow Clarity | The workflow is exceptionally well-sequenced with explicit quality gates at Steps 3 and 8, feedback loops for revision (Steps 5 and 10 with max 2 cycles), human review checkpoints, unit completion checklists, and a clear error-handling table. The sequential dependency chain (each step's output feeds the next) is clearly documented, and validation is built into every phase. | 3 / 3 |
| Progressive Disclosure | The content is a monolithic document (~250 lines) with no references to external files for detailed sub-agent specifications, prompt templates, or examples. The workflow overview section provides a good high-level map, but the fullstack mode variations and per-step agent invocations could be split into separate reference files. No bundle files are provided to support progressive disclosure. | 2 / 3 |
| Total | | 10 / 12 — Passed |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
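The frontmatter_unknown_keys warning can usually be resolved by moving non-spec keys under a metadata block. A minimal sketch, assuming the spec allows a metadata key (the version and author fields here are hypothetical examples, not taken from this skill):

```yaml
---
name: recipe-reverse-engineer
description: Generate PRD and Design Docs from an existing codebase.
metadata:
  version: "1.0"        # hypothetical key, moved out of the top level
  author: example-team  # hypothetical key, moved out of the top level
---
```

Keeping only spec-defined keys at the top level, with everything else nested under metadata, should clear the warning without losing information.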