Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
Overall: 85
Quality: 81% (does it follow best practices?)
Impact: 88% (1.87x average score across 3 eval scenarios)
Advisory: suggest reviewing before use
Quality
Discovery: 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description that clearly articulates what the skill does (create, modify, evaluate, and optimize skills) and when to use it (with an explicit 'Use when...' clause covering multiple trigger scenarios). The language is specific, uses third person voice correctly, and targets a distinct niche that is unlikely to conflict with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Create new skills', 'modify and improve existing skills', 'measure skill performance', 'run evals', 'benchmark skill performance with variance analysis', 'optimize a skill's description for better triggering accuracy'. | 3 / 3 |
| Completeness | Clearly answers both 'what' (create, modify, improve, measure skills) and 'when' with an explicit 'Use when...' clause listing specific trigger scenarios like creating from scratch, editing, running evals, benchmarking, and optimizing descriptions. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'create a skill', 'edit', 'optimize', 'evals', 'benchmark', 'skill performance', 'triggering accuracy', 'description'. These cover a good range of terms a user working with skills would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description targets a very specific meta-domain (skill creation, editing, evaluation, and optimization), which is a clear niche unlikely to conflict with other skills. Terms like 'skill', 'evals', 'triggering accuracy', and 'variance analysis' are highly distinctive. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
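The dimensions above can be approximated mechanically. Below is a minimal sketch of such a description linter; the rule names and the 20-word threshold are illustrative assumptions, not part of any skill spec.

```python
import re

# Hypothetical linter mirroring the rubric above. The rule names and the
# 20-word threshold are illustrative assumptions, not part of any spec.
def lint_description(desc: str) -> dict:
    words = re.findall(r"[a-z']+", desc.lower())
    return {
        # Completeness: an explicit "Use when..." clause answers the "when"
        "has_use_when": "use when" in desc.lower(),
        # Voice: third person means no first-person pronouns
        "third_person": not any(w in ("i", "we", "my", "our") for w in words),
        # Specificity: long enough to name concrete actions and triggers
        "specific_enough": len(words) >= 20,
    }

desc = ("Create new skills, modify and improve existing skills, and measure "
        "skill performance. Use when users want to create a skill from scratch.")
print(lint_description(desc))
# → {'has_use_when': True, 'third_person': True, 'specific_enough': True}
```

A real check would also need the conflict-risk dimension, which requires comparing against the other skills installed alongside this one.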
Implementation: 62%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides excellent actionability and workflow clarity, with concrete commands, JSON schemas, and a well-structured iteration loop with validation checkpoints. However, it is significantly overlong: conversational filler, repeated restatements of the same points, and inline content that belongs in reference files. The casual tone ('Cool? Cool.', 'Sorry in advance but I'm gonna go all caps here') adds personality but wastes tokens in a context-window-constrained environment.
Suggestions
- Cut conversational filler ('Cool? Cool.', 'Good luck!', apologetic asides) and remove the repeated restatements of the core loop: state it once clearly at the top and reference it rather than repeating it three times.
- Move environment-specific sections (Claude.ai instructions, Cowork instructions) into separate reference files (e.g., references/claude-ai.md, references/cowork.md) and point to them from SKILL.md with one-line pointers.
- Remove the 'Communicating with the user' section; Claude already understands audience adaptation. Replace it with a single line like 'Adapt technical terminology to the user's apparent expertise level.'
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~500+ lines with significant conversational padding ('Cool? Cool.'), extensive explanations of concepts Claude already knows (what skills are, how to communicate with users, what PDFs are), and repeated emphasis blocks restating the same core loop multiple times. The casual tone adds tokens without adding clarity. | 1 / 3 |
| Actionability | Despite the verbosity, the skill provides highly concrete, executable guidance: specific CLI commands, exact JSON schemas, file path conventions, step-by-step sequences with actual code blocks, and precise instructions for tools like generate_review.py, aggregate_benchmark, and package_skill. The guidance is copy-paste ready throughout. | 3 / 3 |
| Workflow Clarity | The multi-step workflow is clearly sequenced with explicit validation checkpoints: spawn runs → draft assertions while waiting → capture timing → grade → aggregate → launch viewer → collect feedback → iterate. Each step has clear inputs/outputs, and the feedback loop (review → improve → rerun) is well-defined with error recovery guidance. | 3 / 3 |
| Progressive Disclosure | The skill references external files appropriately (agents/grader.md, agents/comparator.md, agents/analyzer.md, references/schemas.md) with clear guidance on when to read them. However, the SKILL.md itself is monolithic and contains substantial content that could be split into reference files: the description optimization section, Claude.ai-specific instructions, and Cowork-specific instructions could each be separate files, keeping the main skill leaner. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
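The aggregation step in the workflow above (grade per-scenario runs, then combine them with variance analysis) can be sketched as follows. The function name echoes the aggregate_benchmark tool the review mentions, but this signature, the input shape, and the output fields are all assumptions for illustration.

```python
import statistics

# Sketch of the aggregation step described above: combine per-scenario eval
# scores into a benchmark summary with variance analysis. The name echoes the
# aggregate_benchmark tool the review mentions, but this signature, the input
# shape, and the output fields are all assumptions.
def aggregate_benchmark(scenario_scores, baseline=None):
    mean = statistics.mean(scenario_scores)
    # Sample variance captures run-to-run spread across eval scenarios
    spread = statistics.variance(scenario_scores) if len(scenario_scores) > 1 else 0.0
    summary = {
        "scenarios": len(scenario_scores),
        "average": round(mean, 2),
        "variance": round(spread, 4),
    }
    if baseline:
        # e.g. a "1.87x average score" figure compares against a baseline run
        summary["improvement"] = round(mean / baseline, 2)
    return summary

print(aggregate_benchmark([0.78, 0.85, 0.80], baseline=0.43))
```

Reporting variance alongside the average is what makes the "1.87x" style multiplier trustworthy: a high mean with high variance means the skill helps inconsistently.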
Validation: 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 checks passed. No warnings or errors.
Version: 431bfad