Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
A skill for creating new skills and iteratively improving them.
At a high level, the process of creating a skill goes like this: understand what the user needs, write a draft, write test cases, run the evals, then use the eval-viewer/generate_review.py script to show the user the results for them to look at, and also let them look at the quantitative metrics.

Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.
Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.
Cool? Cool.
The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. In case you haven't heard (and how could you, it started only very recently), there's a trend where the power of Claude is inspiring plumbers to open up their terminals, and parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.
So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:
It's OK to briefly clarify a term with a short definition if you're unsure the user will get it.
Use the AskUserQuestion tool aggressively at every decision point. Do not ask open-ended text questions in conversation when structured choices exist. This is the single biggest UX improvement you can make — users juggle multiple windows and may not have looked at this conversation in 20 minutes.
Every AskUserQuestion MUST follow this structure:
- A short summary of current context and what was found or produced
- A RECOMMENDATION line with a one-line reason
- Lettered options (A/B/C/D), with the recommended choice marked
- Where relevant, a time estimate (human: ~X min / Claude: ~Y min).

Rules:
Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill in the gaps, and should confirm before you proceed to the next step.
After extracting answers from conversation history (or asking questions 1-3), use AskUserQuestion to confirm the skill type and testing strategy:
Creating skill "[name]" — here's what I understand so far:
- Purpose: [1-sentence summary]
- Triggers on: [key phrases]
- Output: [format]
RECOMMENDATION: [Objective/Subjective/Hybrid] skill → [suggested testing approach]
Options:
A) Objective output (files, code, data) — set up automated test cases (Recommended if output is verifiable)
B) Subjective output (writing, design) — qualitative human review only
C) Hybrid — automated checks for structure, human review for quality
D) Skip testing for now — just build the skill and iterate by feel

This upfront classification drives the entire evaluation strategy downstream. Get it right here to avoid wasted effort later.
The user's private methodology — their domain rules, workflow decisions, competitive edge — is what makes a skill valuable. No public repo can provide that. But the user shouldn't waste time reinventing infrastructure (API clients, auth flows, rate limiting) when mature tools exist. Prior art research finds building blocks for the infrastructure layer so the skill can focus on encoding the user's unique methodology.
Search these channels in order (use subagents for 4-8 in parallel):
| Priority | Channel | What to search | How |
|---|---|---|---|
| 1 | Conversation history | User's proven workflows, verified API patterns, corrections made during debugging | Grep recent conversations for the service/API name |
| 2 | Local documents & SOPs | User's private methodology, runbooks, existing skills | Search project directory, ~/.claude/CLAUDE.md, ~/.claude/references/ |
| 3 | Installed plugins & MCPs | Already-integrated tools | Check ~/.claude/plugins/, parse installed_plugins.json; check ~/.claude.json for configured MCP servers |
| 4 | skills.sh | Community skills | WebFetch https://skills.sh/?q=<keyword> |
| 5 | Anthropic official plugins | Official/partner plugins | WebFetch https://github.com/anthropics/claude-plugins-official/tree/main/plugins and external_plugins directory |
| 6 | MCP servers on GitHub | Existing MCP servers for the same API | WebSearch "<service-name> MCP server site:github.com" |
| 7 | Official API docs | The target service's own documentation | WebSearch "<service-name> API documentation" or WebFetch the docs URL |
| 8 | npm / PyPI | SDK or CLI packages | npm search <keyword> or curl https://pypi.org/pypi/<name>/json |
Channels 1-3 surface the user's own proven patterns and existing integrations. Channels 4-8 find public infrastructure. The user's private SOP always takes precedence — public tools are building blocks, not replacements. In competitive domains (finance, trading, proprietary operations), the valuable methodology will never be public.
If a public MCP server or skill is found, clone it and verify it actually works — don't trust the README.
Decision matrix:
| Finding | Action |
|---|---|
| Mature MCP/SDK handles the infrastructure | Adopt it, build on top — install the MCP, then build the skill as a workflow layer encoding the user's methodology |
| Partial MCP or SDK exists | Extend — use for infrastructure, fill gaps in the skill |
| Public skill covers the same domain | Use for structural inspiration only — public skills in competitive domains are generic by definition. The user's edge is their private SOP |
| Nothing public exists | Build from scratch — validate API access patterns work (auth, endpoints, proxy) before writing the full skill |
| Integration cost > build cost | Build it — a 2-hour custom implementation you own beats a "mature" tool with integration friction and upstream risk |
After research completes, present findings via AskUserQuestion:
Research complete for "[skill-name]". Here's what I found:
[1-2 sentence summary of what exists publicly]
RECOMMENDATION: [ADOPT / EXTEND / BUILD] because [one-line reason]
Options:
A) Adopt [tool/MCP X] for infrastructure, build methodology layer on top (Recommended)
B) Extend [partial tool Y] — use what works, fill gaps in the skill
C) Build from scratch — nothing found matches well enough
D) Show me the detailed findings before I decide

When in doubt, bias toward adopting mature infrastructure for the plumbing layer and building custom logic for the methodology layer — that's where the value lives.
Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.
Check available MCPs - if useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce burden on the user.
Based on the user interview, fill in these components:
skill-name/
├── SKILL.md (required)
│ ├── YAML frontmatter (name, description required)
│ └── Markdown instructions
└── Bundled Resources (optional)
├── scripts/ - Executable code for deterministic/repetitive tasks
├── references/ - Docs loaded into context as needed
└── assets/ - Files used in output (templates, icons, fonts)

All frontmatter fields are optional, though description is strongly recommended. Configure skill behavior using these fields between --- markers:
---
name: my-skill
description: What this skill does and when to use it. Use when...
context: fork
agent: general-purpose
argument-hint: [topic]
---

| Field | Required | Description |
|---|---|---|
| name | No | Display name for the skill. If omitted, uses the directory name. Lowercase letters, numbers, and hyphens only (max 64 characters). |
| description | Recommended | What the skill does and when to use it. Claude uses this to decide when to apply the skill. If omitted, uses the first paragraph of markdown content. |
| context | No | Set to fork to run in a forked subagent context. See "Inline vs Fork: Critical Decision" below — choosing wrong breaks your skill. |
| agent | No | Which subagent type to use when context: fork is set. Options: Explore, Plan, general-purpose, or custom agents from .claude/agents/. Default: general-purpose. |
| disable-model-invocation | No | Set to true to prevent Claude from automatically loading this skill. Use for workflows you want to trigger manually with /name. Default: false. |
| user-invocable | No | Set to false to hide from the / menu. Use for background knowledge users shouldn't invoke directly. Default: true. |
| allowed-tools | No | Pre-approved tools list. Recommendation: Do NOT set this field. Omitting it gives the skill full tool access governed by the user's permission settings. Setting it restricts the skill's capabilities unnecessarily. |
| model | No | Model to use when this skill is active. |
| argument-hint | No | Hint shown during autocomplete to indicate expected arguments. Example: [issue-number] or [filename] [format]. |
| hooks | No | Hooks scoped to this skill's lifecycle. Example: hooks: { pre-invoke: [{ command: "echo Starting" }] }. See Claude Code Hooks documentation. |
Special placeholder: $ARGUMENTS in skill content is replaced with text the user provides after the skill name. For example, /deep-research quantum computing replaces $ARGUMENTS with quantum computing.
This is the most important architectural decision when designing a skill. Choosing wrong will silently break your skill's core capabilities.
CRITICAL CONSTRAINT: Subagents cannot spawn other subagents. A skill running with context: fork (as a subagent) CANNOT:
- spawn parallel agents via the Task tool
- invoke other skills via the Skill tool
Decision guide:
| Your skill needs to... | Use | Why |
|---|---|---|
| Orchestrate parallel agents (Task tool) | Inline (no context) | Subagents can't spawn subagents |
| Call other skills (Skill tool) | Inline (no context) | Subagents can't invoke skills |
| Run Bash commands for external CLIs | Inline (no context) | Full tool access in main context |
| Perform a single focused task (research, analysis) | Fork (context: fork) | Isolated context, clean execution |
| Provide reference knowledge (coding conventions) | Inline (no context) | Guidelines enrich main conversation |
| Be callable BY other skills | Fork (context: fork) | Must be a subagent to be spawned |
Example: Orchestrator skill (MUST be inline):
---
name: product-analysis
description: Multi-path parallel product analysis with cross-model synthesis
---
# Orchestrates parallel agents — inline is REQUIRED
1. Auto-detect available tools (which codex, etc.)
2. Launch 3-5 Task agents in parallel (Explore subagents)
3. Optionally invoke /competitors-analysis via Skill tool
4. Synthesize all results

Example: Specialist skill (fork is correct):
---
name: deep-research
description: Research a topic thoroughly using multiple sources
context: fork
agent: Explore
---
Research $ARGUMENTS thoroughly:
1. Find relevant files using Glob and Grep
2. Read and analyze the code
3. Summarize findings with specific file references

Example: Reference skill (inline, no task):
---
name: api-conventions
description: API design patterns for this codebase
---
When writing API endpoints:
- Use RESTful naming conventions
- Return consistent error formats

Skills should be orthogonal: each skill handles one concern, and they combine through composition.
Pattern: Orchestrator (inline) calls Specialist (fork)
product-analysis (inline, orchestrator)
├─ Task agents for parallel exploration
├─ Skill('competitors-analysis', 'X') → fork subagent
└─ Synthesizes all results
competitors-analysis (fork, specialist)
└─ Single focused task: analyze one competitor codebase

Rules for composability:
- An orchestrator must stay inline (no context: fork) to use the Task/Skill tools
- A specialist sets context: fork to run in an isolated subagent context

Beyond orchestrator/specialist composition, skills often form sequential pipelines where one skill's output is the next skill's input. Each skill should proactively suggest the logical next step after completing its work.
Pattern: "Next Step" section at the end of SKILL.md
## Next Step: [Action Description]
After [this skill completes], suggest the natural next skill:
\```
[Summary of what was just accomplished].
Options:
A) [Next skill] — [one-line reason] (Recommended)
B) [Alternative skill] — [when this is better]
C) No thanks — [the current output is sufficient]
\```

Real-world pipeline examples:
youtube-downloader → asr-transcribe-to-text → transcript-fixer → meeting-minutes-taker → pdf-creator
deep-research → fact-checker → ppt-creator
doc-to-markdown → docs-cleaner
claude-code-history-files-finder → continue-claude-work

Rules for pipeline handoff:
When to add a handoff: Ask "does this skill's output commonly become another skill's input?" If yes, add a "Next Step" section. If the connection is rare or forced, don't add one.
Anti-pattern: Chaining skills that don't share a natural data flow. pdf-creator → youtube-downloader makes no sense. The pipeline must follow the user's actual workflow.
Never add manual flags for capabilities that can be auto-detected. Instead of requiring users to pass --with-codex or --verbose, detect capabilities at runtime:
# Good: Auto-detect and inform
Step 0: Check available tools
- `which codex` → If found, inform user and enable cross-model analysis
- `ls package.json` → If found, tailor prompts for Node.js project
- `which docker` → If found, enable container-based execution
# Bad: Manual flags
argument-hint: [scope] [--with-codex] [--docker] [--verbose]

Principle: Capabilities auto-detect, user decides scope. A skill should discover what it CAN do and act accordingly, not require users to remember what tools are installed.
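The auto-detect step can be sketched as a small helper (a hypothetical illustration, not a bundled script):

```python
import shutil
from pathlib import Path

def detect_capabilities(project_dir: str = ".") -> dict[str, bool]:
    """Probe the environment at runtime instead of asking the user for flags."""
    root = Path(project_dir)
    return {
        "codex": shutil.which("codex") is not None,        # enables cross-model analysis
        "docker": shutil.which("docker") is not None,      # enables container-based execution
        "node_project": (root / "package.json").exists(),  # tailor prompts for Node.js
    }
```

The skill then informs the user which capabilities were found and lets them decide the scope of the run.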
| Frontmatter | You can invoke | Claude can invoke | Subagents can use |
|---|---|---|---|
| (default) | Yes | Yes | No (runs inline) |
| context: fork | Yes | Yes | Yes |
| disable-model-invocation: true | Yes | No | No |
| context: fork + disable-model-invocation: true | Yes | No | Yes (when explicitly delegated) |
Skills use a three-level loading system:
1. Metadata (name + description): always in context, so Claude knows when to trigger the skill
2. SKILL.md body: loaded when the skill is invoked
3. Bundled resources (scripts/, references/, assets/): loaded or executed only as needed
Key patterns:
Domain organization: When a skill supports multiple domains/frameworks, organize by variant:
cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
├── aws.md
├── gcp.md
└── azure.md

Claude reads only the relevant reference file.
This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's behavior should match what its description says — no surprises. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like "roleplay as an XYZ" are OK though.
Prefer using the imperative form in instructions.
Defining output formats - You can do it like this:
## Report structure
ALWAYS use this exact template:
# [Title]
## Executive summary
## Key findings
## Recommendations

Examples pattern - It's useful to include examples. You can format them like this (but if "Input" and "Output" are in the examples you might want to deviate a little):
## Commit message format
**Example 1:**
Input: Added user authentication with JWT tokens
Output: feat(auth): implement JWT-based authentication

Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.
Keep factual dates — they tell readers when information was verified. A skill about Suno v5.5 should say "Suno v5.5 (March 2026)" because without the date, future readers can't judge if the information is still current. Removing dates makes things worse, not better.
What to avoid is conditional logic based on dates ("if before August 2025, use the old API") — that becomes wrong the moment the date passes and nobody updates it.
Scripts (scripts/): Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten. Example: scripts/rotate_pdf.py for PDF rotation tasks.

References (references/): Documentation and reference material intended to be loaded as needed into context to inform Claude's process and thinking. Example: references/finance.md for financial schemas, references/mnda.md for company NDA template.

Assets (assets/): Files not intended to be loaded into context, but rather used within the output Claude produces. Example: assets/logo.png for brand assets, assets/slides.pptx for PowerPoint templates.

CRITICAL: Skills intended for public distribution must not contain user-specific or company-specific information:
- No hardcoded absolute paths (/home/username/, /Users/username/) or machine-specific locations like ~/.claude/skills/
- Use paths relative to the skill root instead (scripts/example.py, references/guide.md)
- No personal or company-specific names (~/workspace/project, username, your-company)

CRITICAL: Skills should NOT contain version history or version numbers in SKILL.md:
- No version sections (## Version, ## Changelog) in SKILL.md; version tracking belongs in the plugin manifest's plugins[].version field

Filenames must be self-explanatory without reading contents.
Pattern: <content-type>_<specificity>.md
Examples:
- Bad (too generic): commands.md, cli_usage.md, reference.md
- Good (specific): script_parameters.md, api_endpoints.md, database_schema.md

Test: Can someone understand the file's contents from the name alone?
Anthropic has written skill authoring best practices; you SHOULD retrieve them before you create or update any skills: https://platform.claude.com/docs/en/agents-and-tools/agent-skills/best-practices.md
Also read references/skill-development-methodology.md before starting — it covers the full 8-phase development process with prior art research, counter review, and real failure case studies. The two references are complementary: the Anthropic doc covers principles, the methodology covers process.
After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Present them via AskUserQuestion:
Skill draft is ready. Here are [N] test cases I'd like to run:
1. "[test prompt 1]" — tests [what aspect]
2. "[test prompt 2]" — tests [what aspect]
3. "[test prompt 3]" — tests [what aspect]
Each test runs the skill + a baseline (no skill) for comparison.
Estimated time: ~[X] minutes total.
RECOMMENDATION: Run all [N] test cases now.
Options:
A) Run all test cases (Recommended)
B) Run test cases, but let me modify them first
C) Add more test cases before running
D) Skip testing — the skill looks good enough to ship

Save test cases to evals/evals.json. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.
{
"skill_name": "example-skill",
"evals": [
{
"id": 1,
"prompt": "User's task prompt",
"expected_output": "Description of expected result",
"files": []
}
]
}

See references/schemas.md for the full schema (including the assertions field, which you'll add later).
This section is one continuous sequence — don't stop partway through. Do NOT use /skill-test or any other testing skill.
Put results in <skill-name>-workspace/ as a sibling to the skill directory. Within the workspace, organize results by iteration (iteration-1/, iteration-2/, etc.) and within that, each test case gets a directory (eval-0/, eval-1/, etc.). Don't create all of this upfront — just create directories as you go.
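The layout described above, sketched out (eval directory names shown generically; prefer descriptive ones):

```
<skill-name>-workspace/            # sibling of the skill directory
├── iteration-1/
│   ├── eval-0/                    # one directory per test case
│   │   ├── eval_metadata.json
│   │   ├── with_skill/outputs/
│   │   └── without_skill/outputs/
│   └── eval-1/
└── iteration-2/                   # created only when iteration 2 starts
```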
For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.
With-skill run:
Execute this task:
- Skill path: <path-to-skill>
- Task: <eval prompt>
- Input files: <eval files if any, or "none">
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">

Baseline run (same prompt, but the baseline depends on context):
- If you're creating a new skill, the baseline runs with no skill at all. Save to without_skill/outputs/.
- If you're improving an existing skill, snapshot the current version first (cp -r <skill-path> <workspace>/skill-snapshot/), then point the baseline subagent at the snapshot. Save to old_skill/outputs/.

Write an eval_metadata.json for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.
{
"eval_id": 0,
"eval_name": "descriptive-name-here",
"prompt": "The user's task prompt",
"assertions": []
}

Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in evals/evals.json, review them and explain what they check.
Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.
Update the eval_metadata.json files and evals/evals.json with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
When each subagent task completes, you receive a notification containing total_tokens and duration_ms. Save this data immediately to timing.json in the run directory:
{
"total_tokens": 84852,
"duration_ms": 23332,
"total_duration_seconds": 23.3
}

This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
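A small helper (hypothetical, for illustration) that writes timing.json in the shape shown above as each notification arrives:

```python
import json
from pathlib import Path

def save_timing(run_dir: str, total_tokens: int, duration_ms: int) -> dict:
    """Persist the task-notification metrics before they're lost."""
    timing = {
        "total_tokens": total_tokens,
        "duration_ms": duration_ms,
        "total_duration_seconds": round(duration_ms / 1000, 1),
    }
    path = Path(run_dir)
    path.mkdir(parents=True, exist_ok=True)
    (path / "timing.json").write_text(json.dumps(timing, indent=2))
    return timing
```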
Once all runs are done:
Grade each run — spawn a grader subagent (or grade inline) that reads agents/grader.md and evaluates each assertion against the outputs. Save results to grading.json in each run directory. The grading.json expectations array must use the fields text, passed, and evidence (not name/met/details or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
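For programmatically checkable assertions, a check script can emit entries already in the viewer's required shape (text, passed, evidence). A sketch with one hypothetical file-existence check (the real grading rubric lives in agents/grader.md):

```python
import json
from pathlib import Path

def grade_file_exists(run_dir: str, relative_path: str, assertion_text: str) -> dict:
    """One programmatic assertion, in the exact field shape the viewer expects."""
    target = Path(run_dir) / "outputs" / relative_path
    passed = target.exists()
    evidence = f"{target} {'exists' if passed else 'is missing'}"
    return {"text": assertion_text, "passed": passed, "evidence": evidence}

def write_grading(run_dir: str, expectations: list[dict]) -> None:
    """Save grading.json next to the run's outputs."""
    payload = json.dumps({"expectations": expectations}, indent=2)
    (Path(run_dir) / "grading.json").write_text(payload)
```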
Aggregate into benchmark — run the aggregation script from the skill-creator directory:
python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>

This produces benchmark.json and benchmark.md with pass_rate, time, and tokens for each configuration, with mean +/- stddev and the delta. If generating benchmark.json manually, see references/schemas.md for the exact schema the viewer expects.
Put each with_skill version before its baseline counterpart.
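If you do have to assemble benchmark.json by hand, the per-configuration numbers reduce to a mean and standard deviation per metric, plus a signed delta between configurations. A sketch of that arithmetic (not the actual aggregate_benchmark script):

```python
import statistics

def summarize(values: list[float]) -> dict:
    """Mean +/- stddev for one metric (pass rate, time, or tokens) across runs."""
    mean = statistics.mean(values)
    stddev = statistics.stdev(values) if len(values) > 1 else 0.0
    return {"mean": round(mean, 2), "stddev": round(stddev, 2)}

def delta(with_skill: dict, baseline: dict) -> float:
    """Signed difference of means: negative is better for time and tokens."""
    return round(with_skill["mean"] - baseline["mean"], 2)
```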
Do an analyst pass — read the benchmark data and surface patterns the aggregate stats might hide. See agents/analyzer.md (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
Launch the viewer with both qualitative outputs and quantitative data:
nohup python <skill-creator-path>/eval-viewer/generate_review.py \
<workspace>/iteration-N \
--skill-name "my-skill" \
--benchmark <workspace>/iteration-N/benchmark.json \
> /dev/null 2>&1 &
VIEWER_PID=$!

For iteration 2+, also pass --previous-workspace <workspace>/iteration-<N-1>.
Cowork / headless environments: If webbrowser.open() is not available or the environment has no display, use --static <output_path> to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a feedback.json file when the user clicks "Submit All Reviews". After download, copy feedback.json into the workspace directory for the next iteration to pick up.
Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.
Results are ready! I've opened the eval viewer in your browser.
- "Outputs" tab: click through each test case, leave feedback in the textbox
- "Benchmark" tab: quantitative comparison (pass rates, timing, tokens)
Take your time reviewing. When you're done, come back here.
Options:
A) I've finished reviewing — read my feedback and improve the skill
B) I have questions about the results before giving feedback
C) Results look good enough — skip iteration, let's package the skill
D) Results need major rework — let's discuss before iterating

The "Outputs" tab shows one test case at a time:
The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to feedback.json.
When the user tells you they're done, read feedback.json:
{
"reviews": [
{"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
{"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
{"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
],
"status": "complete"
}

Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
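Reading the feedback file can be sketched like this (a hypothetical helper; note that non-empty feedback includes praise as well as complaints, so read it before acting):

```python
import json
from pathlib import Path

def split_feedback(workspace: str) -> tuple[list[dict], list[str]]:
    """Partition reviews into ones with written feedback and ones left blank."""
    data = json.loads((Path(workspace) / "feedback.json").read_text())
    with_notes = [r for r in data["reviews"] if r["feedback"].strip()]
    left_blank = [r["run_id"] for r in data["reviews"] if not r["feedback"].strip()]
    return with_notes, left_blank
```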
Kill the viewer server when you're done with it:
kill $VIEWER_PID 2>/dev/null

This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.
Generalize from the feedback. The big picture is that we're trying to create skills that can be used a million times (maybe literally, maybe even more, who knows) across many different prompts. Here you and the user are iterating on only a few examples over and over again because it helps move faster. The user knows these examples inside and out, and it's quick for them to assess new outputs. But if the skill you and the user are co-developing works only for those examples, it's useless. If there's some stubborn issue, rather than putting in fiddly, overfitted changes or oppressively constrictive MUSTs, try branching out: use different metaphors, or recommend different patterns of working. It's relatively cheap to try, and maybe you'll land on something great.
Keep the prompt lean. Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens.
Explain the why. Try hard to explain the why behind everything you're asking the model to do. Today's LLMs are smart. They have good theory of mind and, when given a good harness, can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task, what the user wrote, and why they wrote it, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.
Look for repeated work across test cases. Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a create_docx.py or a build_chart.py, that's a strong signal the skill should bundle that script. Write it once, put it in scripts/, and tell the skill to use it. This saves every future invocation from reinventing the wheel.
This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.
After analyzing feedback, present your improvement plan via AskUserQuestion:
I've read the feedback from [N] test cases. [X] had specific complaints, [Y] looked good.
Key issues:
- [Issue 1]: [plain-language summary]
- [Issue 2]: [plain-language summary]
RECOMMENDATION: [strategy] because [reason]
Options:
A) Iterative refinement — targeted fixes for the specific issues above (Recommended)
B) Structural redesign — the core approach needs rethinking
C) Bundle a script — I noticed all test runs independently wrote similar code for [X]
D) Expand test set first — add [N] more test cases to avoid overfitting to these examples

After improving the skill:
- Re-run the full eval suite in a fresh iteration-<N+1>/ directory, including baseline runs. If you're creating a new skill, the baseline is always without_skill (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
- Launch the viewer with --previous-workspace pointing at the previous iteration

At the end of each iteration, use AskUserQuestion as a checkpoint:
Iteration [N] complete. Results: [pass_rate]% assertions passing, [delta vs previous].
Options:
A) Continue iterating — I see more room for improvement
B) Accept this version — it's good enough, let's move to packaging
C) Revert to previous iteration — this round made things worse
D) Run blind comparison — rigorously compare this version vs the previous one

Keep going until the user is satisfied or the results have clearly plateaued.
For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read agents/comparator.md and agents/analyzer.md for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.
This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:
[
{"query": "the user prompt", "should_trigger": true},
{"query": "another prompt", "should_trigger": false}
]

The queries must be realistic and something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).
Bad: "Format this data", "Extract text from PDF", "Create a chart"
Good: "ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"
For the should-trigger queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.
For the should-not-trigger queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.
The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
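Before presenting the set, it can help to sanity-check its balance against the 8-10 guideline above. This is an illustrative helper, not part of the skill's scripts:

```python
def check_eval_balance(eval_items, lo=8, hi=10):
    """Sanity-check a trigger eval set's balance.

    Expects items shaped like {"query": str, "should_trigger": bool} and
    verifies each side has between lo and hi entries (the 8-10 guideline).
    Illustrative sketch only.
    """
    pos = sum(1 for item in eval_items if item["should_trigger"])
    neg = len(eval_items) - pos
    return lo <= pos <= hi and lo <= neg <= hi
```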
Present the eval set to the user for review using the HTML template:
Copy assets/eval_review.html and fill in its placeholders:

- __EVAL_DATA_PLACEHOLDER__ → the JSON array of eval items (no quotes around it — it's a JS variable assignment)
- __SKILL_NAME_PLACEHOLDER__ → the skill's name
- __SKILL_DESCRIPTION_PLACEHOLDER__ → the skill's current description

Save the filled-in file (e.g., /tmp/eval_review_<skill-name>.html) and open it: open /tmp/eval_review_<skill-name>.html

The user's reviewed eval set downloads as ~/Downloads/eval_set.json — check the Downloads folder for the most recent version in case there are multiple (e.g., eval_set (1).json).

This step matters — bad eval queries lead to bad descriptions.
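The placeholder fill is plain string substitution. A minimal sketch (the function name and argument shapes are illustrative, not part of the skill's scripts):

```python
import json

def fill_eval_review(template, eval_items, skill_name, skill_description):
    """Fill the eval_review.html placeholders.

    __EVAL_DATA_PLACEHOLDER__ becomes a raw JSON array (it sits in a JS
    variable assignment, so no surrounding quotes); the other two are
    plain string substitutions.
    """
    return (
        template
        .replace("__EVAL_DATA_PLACEHOLDER__", json.dumps(eval_items))
        .replace("__SKILL_NAME_PLACEHOLDER__", skill_name)
        .replace("__SKILL_DESCRIPTION_PLACEHOLDER__", skill_description)
    )
```

Read the template from assets/eval_review.html, pass the eval items through this, and write the result to the /tmp path before opening it.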
Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."
Save the eval set to the workspace, then run in the background:
```
python -m scripts.run_loop \
  --eval-set <path-to-trigger-eval.json> \
  --skill-path <path-to-skill> \
  --model <model-id-powering-this-session> \
  --max-iterations 5 \
  --verbose
```

Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.
While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.
This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with best_description — selected by test score rather than train score to avoid overfitting.
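The "select by test score" behavior described above can be sketched as follows. The field names here are assumptions for illustration, not the script's actual schema:

```python
def pick_best_description(iterations):
    """Pick the description with the highest held-out test score.

    `iterations` is a list of dicts like
    {"description": str, "train_score": float, "test_score": float}.
    Choosing by test_score rather than train_score avoids overfitting
    to the 60% training split. Field names are illustrative.
    """
    return max(iterations, key=lambda it: it["test_score"])["description"]
```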
Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's available_skills list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.
This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.
Take best_description from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.
NEVER edit skills in ~/.claude/plugins/cache/ — that's a read-only cache directory. All changes there are:
ALWAYS verify you're editing the source repository:
```
# WRONG - cache location (read-only copy)
~/.claude/plugins/cache/daymade-skills/my-skill/1.0.0/my-skill/SKILL.md

# RIGHT - source repository
/path/to/your/claude-code-skills/my-skill/SKILL.md
```

Before any edit, confirm the file path does NOT contain /cache/ or /plugins/cache/.
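A simple guard for this check, as a sketch (heuristic only — it rejects any path with a cache segment, which covers the plugin cache layout shown above):

```python
from pathlib import Path

def is_editable_skill_path(path):
    """Return False for paths inside the read-only plugin cache.

    Rejects any path containing a 'cache' segment, which covers
    ~/.claude/plugins/cache/... . A heuristic sketch, not exhaustive.
    """
    parts = Path(path).expanduser().parts
    return "cache" not in parts
```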
When creating or updating a skill, follow these steps in order. Skip steps only when clearly not applicable.
Before starting any skill work, auto-detect all dependencies and proactively install anything missing. Discovering a missing tool mid-workflow (e.g., gitleaks at packaging time, PyYAML at validation) wastes time and breaks flow.
Run the quick check from references/prerequisites.md, auto-install what you can, and present the user a summary checklist. Only proceed when all blocking dependencies are satisfied.
Key blockers: Python 3, PyYAML (validation/packaging), gitleaks (security scan), claude CLI (evals). All scripts must be invoked via python3 -m scripts.<name> from the skill-creator root directory — direct python3 scripts/<name>.py fails due to relative imports.
Skip this step only when the skill's usage patterns are already clearly understood.
To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.
For example, when building an image-editor skill, relevant questions include:
To avoid overwhelming users, avoid asking too many questions in a single message.
Analyze each example by:
Match specificity to task risk:
Skip this step if the skill already exists.
When creating a new skill from scratch, always run the init_skill.py script:
```
python3 -m scripts.init_skill <skill-name> --path <output-directory>
```

The script creates a template skill directory with proper frontmatter, resource directories, and example files.
When editing, remember that the skill is being created for another instance of Claude to use. Focus on information that would be beneficial and non-obvious to Claude.
When updating an existing skill: Scan all existing reference files to check if they need corresponding updates.
Pipeline check: Consider whether this skill's output naturally feeds into another skill. If so, add a "Next Step" handoff section (see "Pipeline Handoff" in the Skill Writing Guide). Also check if any existing skill should chain into this one.
Use AskUserQuestion before executing this step:
This skill appears to contain content from a real project.
Before distribution, I should check for business-specific details
(company names, internal paths, product names) that shouldn't be public.
RECOMMENDATION: Run selective sanitization — review each finding before removing.
Options:
A) Full sanitization — automatically remove all business-specific content
B) Selective sanitization — show me each finding and let me decide (Recommended)
C) Skip — this is for internal use only, no sanitization needed

Skip if: skill was created from scratch for public use, user declines, or skill is for internal use.
Sanitization process:
Before packaging or distributing a skill, run the security scanner to detect hardcoded secrets and personal information:
```
# Required before packaging
python3 -m scripts.security_scan <path/to/skill-folder>

# Verbose mode includes additional checks for paths, emails, and code patterns
python3 -m scripts.security_scan <path/to/skill-folder> --verbose
```

Detection coverage:
First-time setup: Install gitleaks if not present:
```
# macOS
brew install gitleaks

# Linux/Windows - see script output for installation instructions
```

Exit codes:
- 0 - Clean (safe to package)
- 1 - High severity issues
- 2 - Critical issues (MUST fix before distribution)
- 3 - gitleaks not installed
- 4 - Scan error

If issues are found, present them via AskUserQuestion:
Security scan found [N] issues in "[skill-name]":
- [SEVERITY] [file]: [description]
- ...
RECOMMENDATION: Fix automatically — these look like [accidental leaks / false positives].
Options:
A) Fix all issues automatically (Recommended)
B) Review each finding — let me decide per-item (some may be intentional)
C) Override and proceed — I accept the risk for internal distribution

Once the skill is ready, package it into a distributable file:
```
python3 -m scripts.package_skill <path/to/skill-folder>
```

Optional output directory:

```
python3 -m scripts.package_skill <path/to/skill-folder> ./dist
```

The packaging script will:
If validation fails, the script reports errors and exits without creating a package.
After packaging, update the marketplace registry to include the new or updated skill.
For new skills, add an entry to .claude-plugin/marketplace.json:
```json
{
  "name": "skill-name",
  "description": "Copy from SKILL.md frontmatter description",
  "source": "./",
  "strict": false,
  "version": "1.0.0",
  "category": "developer-tools",
  "keywords": ["relevant", "keywords"],
  "skills": ["./skill-name"]
}
```

For updated skills, bump the version in plugins[].version following semver.
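The version bump itself is mechanical. A minimal sketch (it ignores pre-release and build metadata, so it only handles plain MAJOR.MINOR.PATCH strings):

```python
def bump_version(version, part="patch"):
    """Bump a plain semver string like '1.0.0'.

    part is 'major', 'minor', or 'patch'. Minimal sketch for updating
    plugins[].version in marketplace.json; pre-release/build suffixes
    are not handled.
    """
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```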
After completing the skill, use AskUserQuestion to determine next steps:
Skill "[name]" is complete. Security scan passed, marketplace updated.
Options:
A) Package and export as .skill file for distribution
B) Run description optimization — improve auto-triggering accuracy (~5 min)
C) Expand test set and iterate more — add edge cases before shipping
D) Done for now — I'll test it manually and come back if needed

After testing the skill, users may request improvements. Often this happens right after using the skill, with fresh context of how the skill performed.
Refinement filter: Only add what solves observed problems. If best practices already cover it, don't duplicate.
Check whether you have access to the present_files tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:

```
python -m scripts.package_skill <path/to/skill-folder>
```

After packaging, direct the user to the resulting .skill file path so they can install it.
In Claude.ai, the core workflow is the same (draft -> test -> review -> improve -> repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:
Running test cases: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.
Reviewing results: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"
Benchmarking: Skip the quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents. Focus on qualitative feedback from the user.
The iteration loop: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.
Description optimization: This section requires the claude CLI tool (specifically claude -p) which is only available in Claude Code. Skip it if you're on Claude.ai.
Blind comparison: Requires subagents. Skip it.
Packaging: The package_skill.py script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting .skill file.
- Keep the skill's name and name frontmatter field — use them unchanged. E.g., if the installed skill is research-helper, output research-helper.skill (not research-helper-v2).
- Copy the skill to /tmp/skill-name/, edit there, and package from the copy.
- Write output to /tmp/ first, then copy to the output directory — direct writes may fail due to permissions.

If you're in Cowork, the main things to know are:
- Run eval-viewer/generate_review.py with --static <output_path> to write a standalone HTML file instead of starting a server. Then proffer a link that the user can click to open the HTML in their browser.
- Use generate_review.py (not writing your own boutique html code). Sorry in advance but I'm gonna go all caps here: GENERATE THE EVAL VIEWER BEFORE evaluating inputs yourself. You want to get them in front of the human ASAP!
- The user shares feedback.json as a file. You can then read it from there (you may have to request access first).
- Packaging works: package_skill.py just needs Python and a filesystem.
- Description optimization (run_loop.py / run_eval.py) should work in Cowork just fine since it uses claude -p via subprocess, not a browser, but please save it until you've fully finished making the skill and the user agrees it's in good shape.

The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.
- agents/grader.md — How to evaluate assertions against outputs
- agents/comparator.md — How to do blind A/B comparison between two outputs
- agents/analyzer.md — How to analyze why one version beat another

The references/ directory has additional documentation:
- references/schemas.md — JSON structures for evals.json, grading.json, benchmark.json, etc.
- references/sanitization_checklist.md — Checklist for sanitizing business-specific content before public distribution

Repeating one more time the core loop here for emphasis:
- Create evals and run eval-viewer/generate_review.py to help the user review them

Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run eval-viewer/generate_review.py so human can review test cases" in your TodoList to make sure it happens.
Good luck!