Example project-specific skill template based on a real production application.
40%
Does it follow best practices?
Impact: Pending. No eval scenarios have been run.
Quality: Passed. No known issues.
Discovery
0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a meta-label rather than a functional skill description. It provides no information about what the skill does, when it should be used, or what domain it applies to. It would be impossible for Claude to correctly select this skill from a list of available skills.
Suggestions
- Replace the meta-description with concrete actions the skill performs (e.g., 'Generates API endpoints, configures database models, and scaffolds service layers for the XYZ application').
- Add an explicit 'Use when...' clause with natural trigger terms that describe the scenarios and user requests that should activate this skill.
- Specify the domain, technology stack, or project name to make the skill clearly distinguishable from other skills.
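Applied together, these suggestions could produce frontmatter along the following lines. This is a sketch only: the skill name is invented, and the wording reuses the hypothetical 'XYZ application' from the example suggestion above, not anything from the reviewed skill itself.

```markdown
---
name: xyz-app-scaffolding  # illustrative name, not from the reviewed skill
description: >
  Generates API endpoints, configures database models, and scaffolds
  service layers for the XYZ application. Use when adding or modifying
  XYZ backend features, such as new CRUD endpoints, database models,
  or service-layer modules.
---
```

A description in this shape answers both 'what does this do' and 'when should Claude use it', and the project name makes it distinguishable from neighboring skills.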
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Example project-specific skill template' is entirely abstract and does not describe any capabilities. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. There is no 'Use when...' clause and no description of functionality. | 1 / 3 |
| Trigger Term Quality | There are no natural keywords a user would say. Terms like 'template', 'project-specific', and 'production application' are generic meta-language, not actionable trigger terms. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so generic it could apply to virtually any skill. 'Project-specific skill template based on a real production application' provides zero distinguishing information. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation
50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is highly actionable, with excellent, executable code examples across the full stack, but it is far too verbose for a SKILL.md file: it inlines extensive boilerplate patterns that Claude already knows how to produce (standard fetch wrappers, basic React hooks, pytest fixtures). Splitting the content into referenced sub-files would help greatly, keeping SKILL.md as a concise overview of project-specific constraints and critical rules while moving generic code patterns to separate files.
Suggestions
- Move generic code patterns (API response format, fetch wrapper, useApi hook, test structures) into separate referenced files like `code-patterns.md` and keep only project-specific deviations from standard patterns in SKILL.md.
- Remove explanations of standard technologies Claude already understands (e.g., what App Router is, how fetch works) and focus only on project-specific conventions and constraints.
- Add validation/verification steps to the deployment workflow: e.g., 'After deploy, verify with `curl https://api.example.com/health` and confirm 200 response' with error recovery guidance.
- Consolidate the 'Critical Rules' section to be more prominent (move it higher) since these project-specific constraints are the highest-value content that Claude wouldn't know otherwise.
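The post-deploy verification suggested above could be wrapped in a small helper. This is a minimal sketch, assuming the project exposes a health endpoint that returns HTTP 200 when healthy; the `https://api.example.com/health` URL is the placeholder from the suggestion, not a real endpoint.

```python
import time
import urllib.request
from typing import Callable

def verify_deploy(fetch_status: Callable[[], int],
                  retries: int = 3,
                  delay: float = 2.0) -> bool:
    """Poll the health check until it reports 200 or retries run out."""
    for attempt in range(retries):
        if fetch_status() == 200:
            return True
        if attempt < retries - 1:
            time.sleep(delay)  # back off before the next probe
    return False

def http_status(url: str) -> int:
    """Return the HTTP status code for a GET request to `url`."""
    with urllib.request.urlopen(url) as resp:
        return resp.status

# Example (placeholder URL from the suggestion above):
# ok = verify_deploy(lambda: http_status("https://api.example.com/health"))
# if not ok: roll back or inspect deploy logs before proceeding.
```

Keeping the status fetcher injectable makes the retry logic testable without network access, which is the kind of feedback loop the workflow currently lacks.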
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | This is extremely verbose at ~250+ lines. It explains basic concepts Claude already knows (what App Router is, how fetch works, basic React hooks patterns, standard pytest fixtures). The architecture diagram, while nice, adds significant token cost. Much of this is boilerplate that Claude can generate on demand rather than needing to be loaded into context. | 1 / 3 |
| Actionability | The skill provides fully executable, copy-paste ready code examples across Python (FastAPI, pytest, Claude API), TypeScript (fetch wrapper, React hooks, tests), and deployment commands. All code is concrete and complete, not pseudocode. | 3 / 3 |
| Workflow Clarity | The deployment workflow has a checklist and commands, but lacks validation checkpoints and feedback loops. There's no 'if build fails, do X' guidance, no verification step after deployment (e.g., smoke test), and the testing section lists commands without integrating them into a development workflow sequence. | 2 / 3 |
| Progressive Disclosure | The skill references related files at the bottom (coding-standards.md, backend-patterns.md, etc.), but the main content is a monolithic wall of inline code examples that could be split into separate reference files. The architecture overview, all code patterns, testing examples, and deployment details are all inlined when they could be referenced. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
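One possible shape for the slimmed-down SKILL.md, consistent with the progressive-disclosure findings and reusing the file names the review already mentions. The section wording and the mapping of topics to files are illustrative, not taken from the skill under review.

```markdown
## Critical Rules

(Project-specific constraints stay inline; this is the content Claude
cannot infer on its own.)

## Code patterns

Only deviations from standard patterns are kept here. For the rest, see:

- API response format and fetch wrapper: code-patterns.md
- Backend conventions (FastAPI, pytest): backend-patterns.md
- General conventions: coding-standards.md
```

With this layout, SKILL.md stays small enough to load cheaply, and the generic patterns are pulled in only when a task actually needs them.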
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed. Validation for skill structure reported no warnings or errors.