Generate a project-level CLAUDE.md from stack detection and user-selected rule categories. Use when starting a new project, onboarding a repo, or when the user says "seed claude.md", "create project rules", "set up CLAUDE.md", "configure this project for me", or wants to establish coding conventions.
Overall score: 87

- Quality: 83% (Does it follow best practices?)
- Impact: 92%, 1.26x average score across 3 eval scenarios. Passed, no known issues.
Quality
Discovery
89%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description with excellent trigger term coverage and clear completeness, explicitly stating both what the skill does and when to use it. The main weakness is that the 'what' portion could be slightly more specific about the concrete outputs or actions involved (e.g., detecting languages/frameworks, generating linting rules, style conventions). Overall, it would perform well in a multi-skill selection scenario.
Suggestions
Add more specific concrete actions to the 'what' portion, e.g., 'detects languages and frameworks, generates linting rules, style conventions, and project structure guidelines' to improve specificity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (project-level CLAUDE.md generation) and mentions 'stack detection' and 'user-selected rule categories' as mechanisms, but doesn't list multiple concrete output actions (e.g., what specific rules or conventions are generated, what the file contains). | 2 / 3 |
| Completeness | Clearly answers both 'what' (generate a project-level CLAUDE.md from stack detection and user-selected rule categories) and 'when' (explicit 'Use when...' clause with multiple trigger scenarios and exact phrases). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms: 'seed claude.md', 'create project rules', 'set up CLAUDE.md', 'configure this project for me', 'coding conventions', 'onboarding a repo', 'starting a new project'. These are phrases users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very distinct niche: generating CLAUDE.md files specifically via stack detection and rule categories. The trigger terms like 'seed claude.md' and 'create project rules' are highly specific and unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
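The rubric total above is a simple sum of the four per-dimension scores. A minimal sketch of that aggregation follows; note that the pass threshold is an assumption for illustration, since the report does not state the actual cutoff:

```python
# Sketch of rubric aggregation for a "Total: 11 / 12 (Passed)" line.
# PASS_THRESHOLD is hypothetical; the report does not document the cutoff.
DIMENSIONS = {
    "Specificity": 2,
    "Completeness": 3,
    "Trigger Term Quality": 3,
    "Distinctiveness / Conflict Risk": 3,
}
MAX_PER_DIMENSION = 3
PASS_THRESHOLD = 10  # assumed cutoff

total = sum(DIMENSIONS.values())
maximum = MAX_PER_DIMENSION * len(DIMENSIONS)
verdict = "Passed" if total >= PASS_THRESHOLD else "Failed"
print(f"Total: {total} / {maximum} ({verdict})")  # → Total: 11 / 12 (Passed)
```

The same shape applies to the Implementation table below, which sums to 10 / 12.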
Implementation
77%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, well-structured skill with excellent workflow clarity and actionability. The step-by-step process is concrete and includes proper feedback loops. Minor weaknesses include some redundancy between the Principles section and the construction constraints in Step 4, and the Principles section itself could be folded into the workflow rather than stated separately as meta-commentary.
Suggestions
Consider folding the Principles section directly into the construction constraints of Step 4 to eliminate redundancy and save tokens.
Remove the empirical validation claim ('10.2% higher on compliance') — it's not actionable and adds tokens without changing behavior.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient and avoids explaining basic concepts, but the Principles section restates things that could be more tightly integrated into the workflow itself. The empirical validation claim ('10.2% higher') adds tokens without being actionable. Some redundancy between principles and construction constraints. | 2 / 3 |
| Actionability | The skill provides concrete, executable guidance throughout: specific files to read for stack detection, exact markdown templates for generated sections, explicit construction constraints (under 50 lines, situation→action pattern), and clear AskUserQuestion interaction patterns. The example CLAUDE.md sections are copy-paste ready. | 3 / 3 |
| Workflow Clarity | The 5-step workflow is clearly sequenced with explicit checkpoints: detect → select categories → gather constraints → generate → present and refine. Step 5 includes a feedback loop (refine → re-present) and clear terminal conditions. Each step has concrete substeps and the overall flow is unambiguous. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections (Principles, Presentation, Workflow, Boundaries), but everything is in a single file with no references to external resources. The inline markdown templates in Step 4 are somewhat lengthy and could be split into a separate templates file, though for a skill of this size it's borderline acceptable. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Validation
100%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.