Creates high-quality agent skills by running the official Claude skill creator plugin and then applying a structured set of enhancement phases covering use-case design, frontmatter completeness, body structure, writing quality, and progressive disclosure architecture. Use this skill when the user wants to create a new skill, improve an existing SKILL.md, or asks "how do I write a good skill", "help me make a skill for X", "review my SKILL.md", or "turn this workflow into a skill". Trigger even if the user just says "let's make a skill" or describes a workflow they want to capture.
A great skill is not just documentation — it's a set of instructions that drives an agent to a concrete outcome without requiring the user to direct every step. This skill layers structured enhancement phases on top of the official Claude skill creator to produce skills that are well-designed, properly structured, and immediately useful.
The official skill creator plugin handles the core loop of draft → test → evaluate → improve. These phases pick up where it leaves off, adding the craft that turns a functional skill into an excellent one.
Begin every new skill by running the official Claude skill creator plugin: https://claude.com/plugins/skill-creator
It handles the initial draft → test → evaluate → improve loop described above.
Complete this phase fully before moving to the enhancement phases below. The enhancements are most useful once a working draft exists.
Review the draft skill for 2–3 concrete use-cases. Each use-case must answer three questions: what is the goal, what are the steps to get there, and when is it done?
The step pattern that works for most agentic skills:
1. Prepare — gather what's needed, verify prerequisites
2. Execute — run the main action
3. Analyze — understand what the output means
4. Fix/Act — take the action that resolves the finding or advances the goal
5. Verify — confirm the action worked
6. Repeat steps 3–5 — loop until the done condition is met

The loop (execute → analyze → fix → verify → repeat) is what makes a skill actually useful rather than a prompt that stops after one command. Without an explicit repeat step, Claude will often report findings and wait rather than driving to resolution.
Done when: Every use-case in the skill has a goal, numbered steps including a verify/repeat step, and an explicit done condition.
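As a concrete illustration, here is the pattern applied to a hypothetical dependency-audit use-case — the step names and tool are placeholders, not part of this skill:

```markdown
### Step 2: Fix vulnerable dependencies

Goal: every Critical/High finding resolved or explicitly accepted.

1. Prepare — confirm the lockfile is clean and CI is green
2. Execute — run the dependency scanner
3. Analyze — group findings by severity and affected package
4. Fix — upgrade or patch the highest-severity package
5. Verify — re-run the scanner to confirm the finding is gone
6. Repeat steps 3–5 until no Critical or High findings remain

Done when: the scan reports zero Critical/High findings.
```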
Every SKILL.md frontmatter must include all required fields and should include the recommended optional fields. Check each one:
- name (required)
- description (required) — the primary triggering mechanism (no < or > characters anywhere in the value)
- license (recommended for open-source skills) — e.g., MIT, Apache-2.0
- compatibility (recommended)
- metadata (recommended) — author and version
- mcp-server — required if the skill depends on an MCP server

Done when: All required fields are present and valid; optional fields are filled in where they add useful context for the user or the agent.
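A frontmatter that passes this checklist might look like the following sketch — every value here (name, description wording, author, version) is an illustrative placeholder:

```markdown
---
name: dependency-auditor
description: >-
  Finds and fixes vulnerable dependencies by scanning lockfiles,
  prioritizing Critical and High findings, and re-running the scan
  until clean. Use when the user says "audit our dependencies" or
  "fix the security findings".
license: MIT
compatibility: Claude Code
metadata:
  author: example-team
  version: 1.0.0
---
```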
A SKILL.md body that follows a predictable structure is easier for Claude to navigate and easier for users to read and contribute to.
Required sections, in order:
# [Skill Name] — top-level heading
# Instructions — signals to Claude where executable instructions begin
### Step N: [Step name] — numbered steps for the main workflow. Steps should
be at the use-case level (Step 1: Identify scope, Step 2: Execute, Step 3: Monitor),
with use-case variants nested underneath each step as needed.
## Examples — concrete scenarios in this exact format:
User says: "[realistic user phrase]"
Actions:
1. [what Claude does first]
2. [what Claude does next]
...
Result: [what the user ends up with — stated as an outcome]

Include 3–5 examples that cover: a broad/ambiguous request, a specific targeted request, and at least one edge case or less-obvious trigger. The "User says" phrase should sound like a real person talking, not a formal API call.
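For instance, a filled-in entry for a hypothetical deployment skill might read:

```markdown
User says: "Can you get the new build out to staging?"
Actions:
1. Identify the target environment and the artifact to deploy
2. Run the deploy script for staging
3. Watch the rollout and check the health endpoints
Result: The new build is live on staging and verified healthy.
```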
## Troubleshooting — common failure modes in this exact format:
Error: [the error message or failure symptom]
Cause: [why it happens]
Solution: [how to fix it]

Cover authentication failures, missing prerequisites, and any exit codes or error states that are commonly misunderstood.
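A filled-in entry, using a hypothetical tool and error message for illustration:

```markdown
Error: "401 Unauthorized" when the scan step runs
Cause: The API token in the environment has expired or lacks read scope.
Solution: Re-authenticate and re-run the scan; verify the token's scopes
before retrying the full workflow.
```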
Done when: The skill body has all four section types in order, with at least 3 examples and the most common failure modes documented.
The body is structured. Now make it good.
Outcome-focused positioning
The opening paragraph (and the README if there is one) must lead with the problem being solved and the outcome the user gets — not a description of the tool's features or architecture.
Explain the why, not just the what
For every instruction in the skill, there should be a reason. Claude is smart enough to generalize from a well-explained principle; it doesn't need a rule for every edge case. When you catch yourself writing "ALWAYS" or "NEVER" in all caps, that's a signal to instead explain why the behavior matters — Claude will apply it more intelligently.
Imperative form
Instructions should be direct commands: "Run the scan", "Prioritize Critical and High severity first", "Re-run to confirm the finding is gone." Not: "You should consider running the scan" or "It may be helpful to prioritize..."
Lean prompts
Remove anything that isn't pulling its weight. If a sentence doesn't change what Claude does, cut it. Long skills aren't more powerful — they're harder to follow and more likely to cause Claude to lose the thread.
Done when: The opening positions the skill around outcomes. Instructions use imperative form and explain the why. The skill body has no filler sentences.
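A quick before/after, with invented wording, shows the difference imperative form makes:

```markdown
Before: "It may be helpful to consider re-running the scanner at some
point to see whether the finding has been addressed."

After: "Re-run the scanner. If the finding is still present, apply the
next fix and re-run until it is gone."
```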
Skills have three loading levels:

1. Frontmatter metadata (name and description) — always in context
2. SKILL.md body — loaded when the skill triggers
3. Reference files — loaded only when the workflow needs them
If the SKILL.md body is approaching 500 lines, the skill needs a second layer
of hierarchy. Move domain-specific content into a references/ folder and replace
it in SKILL.md with a pointer and a clear statement of when to read it.
Reference file organization:
skill-name/
├── SKILL.md
└── references/
    ├── domain-a.md   # read when working on domain A
    ├── domain-b.md   # read when working on domain B
    └── domain-c.md   # read when working on domain C

Each reference file should have a table of contents if it exceeds 300 lines.
SKILL.md should tell Claude exactly when to read each file — not just list them,
but say "read references/domain-a.md when the user is asking about X."
Done when: SKILL.md is under 500 lines. Any domain-specific detail that would
push it over lives in references/. Every reference file is pointed to with
explicit guidance on when to load it.
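Using the file names from the tree above, such a pointer in SKILL.md might read:

```markdown
When the user is working on domain A (e.g., asks about X), read
references/domain-a.md before proceeding. Do not load it otherwise.
```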
Example 1: Creating a skill from scratch
User says: "I want to make a skill for deploying to AWS using our internal scripts."
Actions:
1. Run the official skill creator plugin to produce a working draft (Phase 1)
2. Design the deploy use-cases with explicit verify/repeat loops and done conditions (Phase 2)
3. Complete the frontmatter and structure the body with examples and troubleshooting (Phases 3–4)
4. Move script- and environment-specific detail into references/ and keep SKILL.md as the workflow router (Phase 6)

Result: A complete, well-structured skill that an agent can use to carry a deploy from start to finish without step-by-step user guidance.
Example 2: Reviewing and improving an existing skill
User says: "Can you review my SKILL.md and make it better?"
Actions:
1. Check the frontmatter for required and recommended fields (Phase 3)
2. Verify the body structure: Instructions, numbered steps, Examples, Troubleshooting (Phase 4)
3. Tighten the prose: outcome-focused opening, imperative instructions, no filler (Phase 5)
4. Move domain-specific detail into references/ if SKILL.md is approaching 500 lines (Phase 6)

Result: The skill has a validated frontmatter, structured body with examples and troubleshooting, outcome-focused positioning, and stays under 500 lines with domain detail in reference files.
Example 3: Capturing a workflow the user just demonstrated
User says: "We just walked through how I handle PR reviews — turn that into a skill."
Actions:
1. Run the official skill creator plugin, using the demonstrated workflow as the draft input (Phase 1)
2. Shape the workflow into numbered steps with a goal, a verify/repeat loop, and a done condition (Phase 2)
3. Add examples and troubleshooting drawn from what the user actually did during the walkthrough (Phase 4)

Result: The demonstrated workflow is captured as a reusable skill that another agent can follow without the user re-explaining the process.
Problem: The skill triggers too rarely — Claude doesn't use it when it should.
Cause: The description is too narrow or too abstract. It describes what the skill does mechanically but doesn't list the natural-language phrases users actually say.
Solution: Apply Phase 3. Rewrite the description to include 4–6 specific user phrases. Add a "trigger even if the user doesn't mention [tool] by name" clause. Re-run Phase 1 evals to validate the updated description.
Problem: The skill triggers but Claude stops after the first action and reports back instead of driving to completion.
Cause: The skill's use-cases don't have an explicit verify/repeat loop or done condition. Claude defaults to one-shot behavior unless told to loop.
Solution: Apply Phase 2. Add a verify step and a "Repeat steps N–M until [done condition]" instruction to each use-case.
Problem: The SKILL.md is getting long and hard to maintain.
Cause: Domain-specific detail is living in the main skill file instead of reference
files.
Solution: Apply Phase 6. Identify which sections are domain-specific (e.g., one
per language, cloud provider, or tool variant), move each to references/, and
replace with a one-line pointer in SKILL.md.
Problem: The description passes the char limit check but still feels vague.
Cause: The description is feature-focused ("supports four scanning modes") rather than outcome-focused and trigger-phrase rich.
Solution: Rewrite using the structure: [problem solved for user] + [how] + [specific phrases that should trigger this]. Cut technical architecture language. Add the scenarios a real user would describe.
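Applied to an invented scanning skill, that structure turns a feature list into a trigger-rich description:

```markdown
Before: "Supports four scanning modes with configurable severity
thresholds and pluggable reporters."

After: "Finds and fixes vulnerable dependencies by scanning, patching,
and re-scanning until clean. Use when the user says 'audit our
dependencies', 'do we have any CVEs', or 'fix the security findings';
trigger even if no scanner is mentioned by name."
```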