
skill-creator

Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit, or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.


Quality: 85% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Advisory (suggest reviewing before use)


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly articulates what the skill does (create, modify, evaluate, and optimize skills) and when to use it (with an explicit 'Use when...' clause covering multiple trigger scenarios). The description is concise, uses third-person voice, includes natural trigger terms, and occupies a distinct niche that minimizes conflict risk with other skills.
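For illustration, a description with this shape lives in the YAML frontmatter of the skill's SKILL.md. A minimal sketch (field names follow the common SKILL.md convention; the text is abridged from the description above):

```yaml
---
name: skill-creator
description: >
  Create new skills, modify and improve existing skills, and measure
  skill performance. Use when users want to create a skill from
  scratch, edit or optimize an existing skill, or run evals to test
  a skill.
---
```

The 'Use when...' clause is what the dimensions below reward: it enumerates concrete trigger scenarios rather than restating what the skill is.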

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: 'Create new skills', 'modify and improve existing skills', 'measure skill performance', 'run evals', 'benchmark skill performance with variance analysis', 'optimize a skill's description for better triggering accuracy'. | 3 / 3 |
| Completeness | Clearly answers both 'what' (create, modify, improve, measure skills) and 'when', with an explicit 'Use when...' clause listing specific trigger scenarios such as creating from scratch, editing, running evals, benchmarking, and optimizing descriptions. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'create a skill', 'edit', 'optimize', 'evals', 'benchmark', 'skill performance', 'triggering accuracy', 'description'. These cover a good range of terms a user working with skills would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description targets a very specific meta-domain (skill creation, editing, evaluation, and optimization), which is a clear niche unlikely to conflict with other skills. Terms like 'skill', 'evals', 'triggering accuracy', and 'variance analysis' are highly distinctive. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a comprehensive orchestration skill with excellent actionability, workflow clarity, and progressive disclosure. Its main weakness is extreme verbosity — it's roughly 3-4x longer than necessary, with repeated explanations of the core loop, conversational asides, extensive context about user communication styles, and explanations of concepts Claude already understands. The content is high-quality but would benefit significantly from aggressive trimming.

Suggestions:

- Remove the three separate repetitions of the core loop (intro, end of iteration section, and final emphasis section); state it once clearly and reference it.
- Cut the 'Communicating with the user' section significantly; Claude understands audience adaptation, so a single sentence about adjusting jargon to user sophistication suffices.
- Remove conversational filler ('Cool? Cool.', 'Good luck!', 'Sorry in advance but I'm gonna go all caps here') and explanatory padding that doesn't add actionable information.
- Consolidate the Claude.ai-specific and Cowork-specific sections into a single 'Environment Adaptations' table rather than prose paragraphs that repeat similar information.
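To make the last suggestion concrete, the consolidated table might be sketched as follows (the row contents here are placeholders, not text from the skill under review):

```markdown
| Environment | Adaptation |
| ----------- | ---------- |
| Claude.ai   | (Claude.ai-specific guidance, condensed to one row)  |
| Cowork      | (Cowork-specific guidance, condensed to one row)     |
```

A table like this surfaces the differences side by side and removes the need to repeat the shared instructions in each environment's prose section.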

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose at ~700+ lines. It explains many concepts Claude already knows (what JSON is, how subagents work, basic file organization), includes conversational filler ('Cool? Cool.', 'Good luck!'), and repeats the core loop multiple times. Significant portions could be trimmed without losing actionable content. | 1 / 3 |
| Actionability | The skill provides highly concrete, executable guidance throughout: specific CLI commands for packaging, exact JSON schemas for eval files, complete bash commands for running scripts, precise file path conventions, and copy-paste ready code blocks for every step of the workflow. | 3 / 3 |
| Workflow Clarity | The multi-step process is clearly sequenced with numbered steps (Step 0 through Step 9), explicit validation checkpoints (security scan before packaging, validation before distribution), feedback loops (iterate until satisfied), and clear decision points using AskUserQuestion at every stage. Error recovery is addressed (revert to previous iteration option). | 3 / 3 |
| Progressive Disclosure | Content is well-organized with clear references to external files: agents/grader.md, agents/comparator.md, agents/analyzer.md, references/schemas.md, references/sanitization_checklist.md, and workflows/wrapper-skill/workflow.md. References are one level deep, clearly signaled with descriptions of when to read them, and the main file serves as an effective overview and orchestrator. | 3 / 3 |
| Total | | 10 / 12 |

Passed
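The files cited under Progressive Disclosure imply a layout along these lines (inferred from the paths listed above; not verified against the repository):

```text
skill-creator/
├── SKILL.md                      # overview and orchestrator
├── agents/
│   ├── grader.md
│   ├── comparator.md
│   └── analyzer.md
├── references/
│   ├── schemas.md
│   └── sanitization_checklist.md
└── workflows/
    └── wrapper-skill/
        └── workflow.md
```

Keeping references one level deep like this lets the agent load SKILL.md cheaply and pull in a reference file only when its step of the workflow needs it.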

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (1140 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 |

Passed

Repository: daymade/claude-code-skills (Reviewed)

