Standardizes the creation and evaluation of high-density Agent Skills (Claude, Cursor, Windsurf). Ensures skills achieve high Activation (specificity/completeness) and Implementation (conciseness/actionability) scores. Use when: writing or auditing SKILL.md, improving trigger accuracy, or refactoring skills to reduce redundancy and maximize token ROI.
Score: 82

- Quality: 80% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Passed (No known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./.github/skills/common/common-skill-creator/SKILL.md`

## Quality
### Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured description with a clear 'Use when' clause, strong trigger terms, and a distinctive niche. Its main weakness is that the capability descriptions lean slightly abstract ('standardizes', 'ensures') rather than listing concrete mechanical actions the skill performs. Overall it's a strong description that would perform well in skill selection.
Suggestions

- Replace abstract verbs like 'standardizes' and 'ensures' with more concrete actions, e.g., 'Generates YAML frontmatter, scores descriptions against activation rubrics, and suggests trigger term improvements for Agent Skills.'
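Applied to this skill, the suggested rewrite might look like the following frontmatter sketch. The field layout follows the common SKILL.md convention, and the wording is illustrative, not the audited description:

```yaml
---
name: common-skill-creator
description: >-
  Generates YAML frontmatter, scores descriptions against activation
  rubrics, and suggests trigger term improvements for Agent Skills
  (Claude, Cursor, Windsurf). Use when: writing or auditing SKILL.md,
  improving trigger accuracy, or refactoring skills to reduce redundancy.
---
```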
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Agent Skills for Claude, Cursor, Windsurf) and some actions (creation, evaluation, auditing, refactoring), but the actions are somewhat abstract: 'standardizes creation' and 'ensures skills achieve high scores' are more aspirational than concrete. It doesn't list specific concrete actions like 'generates frontmatter YAML' or 'validates trigger terms'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (standardizes creation and evaluation of Agent Skills, ensures high Activation and Implementation scores) and 'when' with an explicit 'Use when:' clause listing three specific trigger scenarios (writing/auditing SKILL.md, improving trigger accuracy, refactoring skills). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'SKILL.md', 'trigger accuracy', 'Agent Skills', 'Claude', 'Cursor', 'Windsurf', 'auditing', 'refactoring skills', 'token ROI'. These are terms a user working on skill files would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche: a meta-skill about writing and evaluating SKILL.md files for AI agents. The specific references to 'SKILL.md', 'Activation/Implementation scores', and 'trigger accuracy' make it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
### Implementation: 70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured meta-skill for creating agent skills, with strong progressive disclosure and clear workflows. Its main weaknesses are moderate verbosity (some self-referential explanations that could be trimmed) and incomplete actionability — key operations like 'spawn parallel subagents' and 'run trigger eval queries' lack concrete executable examples or commands. The skill practices much of what it preaches but doesn't fully achieve the token efficiency it advocates.
Suggestions

- Add concrete, executable examples for key operations like spawning parallel subagents and running trigger evaluations; currently these are described only abstractly.
- Remove the caveman compression explanation example (Claude understands compression techniques) and instead just state the rule with an inline before/after.
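One way to make the 'run trigger eval queries, target ≥80% accuracy' guidance executable is a small self-check harness. The sketch below is illustrative rather than part of the skill: the skill registry, the labeled queries, and the naive keyword-overlap selector are all assumptions standing in for whatever selection mechanism the agent actually uses.

```python
import re

# Minimal trigger-eval harness: given candidate skill descriptions and user
# queries labeled with the skill that should activate, measure how often a
# naive keyword-overlap selector picks the right skill.

SKILLS = {
    "common-skill-creator": (
        "Generates YAML frontmatter, scores descriptions against activation "
        "rubrics, and suggests trigger term improvements for Agent Skills. "
        "Use when writing or auditing SKILL.md or improving trigger accuracy."
    ),
    "git-helper": "Automates commit message formatting and branch cleanup.",
}

EVAL_QUERIES = [
    ("audit my SKILL.md for trigger accuracy", "common-skill-creator"),
    ("improve the trigger terms in this agent skill", "common-skill-creator"),
    ("automate my commit message formatting", "git-helper"),
    ("write frontmatter for a new skill", "common-skill-creator"),
]

def tokens(text: str) -> set:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def select_skill(query: str) -> str:
    """Pick the skill whose description shares the most tokens with the query."""
    q = tokens(query)
    return max(SKILLS, key=lambda name: len(q & tokens(SKILLS[name])))

def trigger_accuracy(queries) -> float:
    hits = sum(select_skill(q) == expected for q, expected in queries)
    return hits / len(queries)

accuracy = trigger_accuracy(EVAL_QUERIES)
print(f"trigger accuracy: {accuracy:.0%}")  # prints "trigger accuracy: 100%"
```

A real harness would substitute the agent's actual skill-selection call for `select_skill`, but even this toy version gives the iteration loop a pass/fail number to converge on.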
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill uses 'Caveman Compression' in places and is generally efficient, but includes some meta-explanations that Claude already knows (e.g., explaining what Token ROI means, explaining why to close DB connections). The caveman compression example itself takes tokens to explain a concept Claude understands. Some sections like 'Anti-Patterns' partially repeat guidance from 'Content Quality'. | 2 / 3 |
| Actionability | The workflows provide numbered steps which are helpful, and the quality checklist is concrete. However, the skill lacks executable code examples: the caveman compression example is illustrative but not a real skill artifact. Commands like 'spawn parallel subagents' are vague without specifying how. The description quality section gives good concrete guidance but some items remain abstract (e.g., 'run trigger eval queries, target ≥80% accuracy' without showing how). | 2 / 3 |
| Workflow Clarity | Both the 'New skill' and 'Existing skill' workflows are clearly sequenced with numbered steps. The existing skill workflow includes a snapshot/backup step before edits (validation checkpoint), and both workflows include explicit test and evaluate steps with iteration loops. The quality checklist serves as a final validation checkpoint. | 3 / 3 |
| Progressive Disclosure | Excellent progressive disclosure with a clear three-level loading system explicitly defined. The References section provides well-signaled, one-level-deep links to 8 separate reference files, each with a clear 'load when' trigger condition. Core content stays in the body while detailed materials are properly externalized. | 3 / 3 |
| Total | | 10 / 12 Passed |
### Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 9 / 11 checks passed.
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | 9 / 11 Passed | |
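Both warnings point at the frontmatter `metadata` block. A minimal sketch of a conforming block, assuming the spec requires a `version` key and string-to-string values only (field names other than `version` are illustrative):

```yaml
---
name: common-skill-creator
metadata:
  version: "1.0.0"      # resolves metadata_version: key was missing
  category: "authoring" # resolves metadata_field: string key -> string value
---
```

Quoting values such as version numbers keeps YAML from parsing them as floats, which would trip the string-to-string check.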