Run after PM approves prd.md. Scans the codebase to fill sections 5-6, proposes EA/GA scope cut, and appends Scope Proposal to prd.md in one pass.
Overall score: 86

Quality
81% — Does it follow best practices?

Impact
92% — 2.42x average score across 3 eval scenarios

Advisory
Suggest reviewing before use
Discovery
85% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is strong in specificity, completeness, and distinctiveness—it clearly defines a narrow workflow step with explicit trigger conditions and concrete actions. Its main weakness is that the trigger terms are process-specific jargon (EA/GA, sections 5-6, prd.md) which may not match how users naturally phrase requests, though this may be appropriate for an internal workflow skill.
Suggestions
- Consider adding more natural language trigger terms a user might say, such as 'scope planning', 'technical feasibility', or 'implementation scoping', to improve discoverability.
- Briefly clarify what 'sections 5-6' refer to (e.g., 'technical constraints and implementation details') so the description is self-contained for skill selection.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Scans the codebase to fill sections 5-6', 'proposes EA/GA scope cut', and 'appends Scope Proposal to prd.md in one pass'. These are detailed, actionable steps. | 3 / 3 |
| Completeness | Clearly answers both 'what' (scans codebase, fills sections 5-6, proposes EA/GA scope cut, appends Scope Proposal) and 'when' ('Run after PM approves prd.md'). The trigger condition is explicit and unambiguous. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'prd.md', 'PM approves', 'scope cut', 'EA/GA', and 'Scope Proposal', but these are fairly domain-specific jargon. Missing more natural, user-facing trigger terms a user might say (e.g., 'technical design', 'implementation plan', 'scope planning'). | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly specific niche: it targets a particular workflow step (post-PM-approval of prd.md), specific sections (5-6), and a specific output (Scope Proposal appended to prd.md). Very unlikely to conflict with other skills. | 3 / 3 |
| Total |  | 11 / 12 — Passed |
Implementation
77% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, highly actionable skill with a clear multi-step workflow, explicit validation checkpoints, and concrete output templates. Its main weakness is moderate verbosity — some sections explain scanning patterns and scoring criteria at length when more concise guidance would suffice. The lack of bundle files means all content is inline, which is acceptable but could benefit from splitting the output template and scanning heuristics into separate references.
Suggestions
- Consider extracting the Scope Proposal markdown template (Step 5) into a separate SCOPE_TEMPLATE.md file to reduce inline bulk and make the template easier to maintain.
- Tighten Step 2's scanning guidance — the list of file types to scan (route/controller, schema/migration, etc.) could be condensed into a single line, since Claude already knows how to navigate codebases.
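To make the first suggestion concrete, here is a minimal sketch of how an externalized template might be consumed. SCOPE_TEMPLATE.md is the filename proposed above; the `$`-placeholder scheme, the function name, and the field names are illustrative assumptions, not part of the skill under review.

```python
# Hypothetical sketch: fill an extracted SCOPE_TEMPLATE.md with scan results.
# The placeholder scheme and helper name are assumptions for illustration.
from pathlib import Path
from string import Template

def render_scope_proposal(template_path: Path, fields: dict[str, str]) -> str:
    """Render the externalized Scope Proposal template with concrete values."""
    template = Template(template_path.read_text(encoding="utf-8"))
    # safe_substitute leaves unknown placeholders intact instead of raising,
    # so a partially filled template still renders.
    return template.safe_substitute(fields)
```

Keeping the template in its own file means the skill body only needs to reference it, which addresses the inline-bulk concern without changing the output format.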
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient and avoids explaining concepts Claude already knows, but some sections are verbose — e.g., the detailed explanation of what to scan in Step 2 (route/controller files, schema/migration files, etc.) and the lengthy scope scoring criteria in Step 4 could be tightened. The inline examples and templates are useful but add bulk. | 2 / 3 |
| Actionability | The skill provides highly concrete, executable guidance: specific file paths (docs/projects/<name>/prd.md), exact markdown templates for the Scope Proposal output, specific MCP tool names (mcp__claude_ai_Figma__get_design_context, GitHub MCP), clear preconditions, and detailed examples of the findings format. Every step has clear, copy-paste-ready outputs. | 3 / 3 |
| Workflow Clarity | The 6-step workflow is clearly sequenced with explicit validation checkpoints: a precondition check (status: approved), feasibility flags after Step 3, verification that no ⚡ items remain before locking scope, and clear stop conditions (missing repo, missing GitHub MCP). The feedback loop requiring 'Needs Discussion' items to be resolved before proceeding is well-defined. | 3 / 3 |
| Progressive Disclosure | The skill is a single monolithic file with no bundle files or references to supporting documents. While the content is well-organized with clear headers, the Scope Proposal template and detailed scanning guidance could potentially be split into referenced files. For a skill of this length (~150 lines of substantive content), some separation would improve navigability. | 2 / 3 |
| Total |  | 10 / 12 — Passed |
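The precondition check and single-pass append praised in the table above could be sketched roughly as follows. The docs/projects/<name>/prd.md path and the `status: approved` frontmatter key come from this report; the function names and the minimal frontmatter parsing are hypothetical, not the skill's actual implementation.

```python
# Hypothetical sketch of the skill's precondition gate and one-pass append.
# Helper names and the frontmatter scan are illustrative assumptions.
from pathlib import Path

def prd_is_approved(prd_path: Path) -> bool:
    """Check prd.md frontmatter for `status: approved` before running."""
    text = prd_path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        return False
    # Minimal scan: the frontmatter is the block between the first `---` pair.
    frontmatter = text.split("---", 2)[1]
    return any(
        line.strip() == "status: approved"
        for line in frontmatter.splitlines()
    )

def append_scope_proposal(prd_path: Path, proposal_md: str) -> None:
    """Append the Scope Proposal section to prd.md in a single pass."""
    with prd_path.open("a", encoding="utf-8") as f:
        f.write("\n## Scope Proposal\n\n")
        f.write(proposal_md.rstrip() + "\n")
```

Gating on the frontmatter status before touching the file is what gives the workflow its clear stop condition: an unapproved PRD is never modified.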
Validation
100% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
Version: 585e8a6