
skill-creator

Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, update or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.

Overall score: 89

Quality: 85%
Does it follow best practices?

Impact: 95% (1.90x)
Average score across 3 eval scenarios

Security by Snyk: Passed
No known issues


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, well-crafted description that clearly communicates both what the skill does and when it should be used. It lists multiple concrete actions, includes natural trigger terms users would employ, and occupies a distinct niche that minimizes conflict risk with other skills. The explicit 'Use when...' clause with varied trigger scenarios is particularly effective.
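For illustration, the pattern praised here (a concrete "what" followed by an explicit 'Use when...' clause) might look like the following SKILL.md frontmatter sketch. This is a hypothetical, condensed rendering of the skill's own description, not its actual file contents:

```yaml
---
name: skill-creator
description: >
  Create new skills, modify and improve existing skills, and measure
  skill performance. Use when users want to create a skill from
  scratch, update an existing skill, run evals, or benchmark
  performance.
---
```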

Specificity (3 / 3): Lists multiple specific concrete actions: 'create new skills', 'modify and improve existing skills', 'measure skill performance', 'run evals', 'benchmark skill performance with variance analysis', 'optimize a skill's description for better triggering accuracy'.

Completeness (3 / 3): Clearly answers both 'what' (create, modify, improve, measure skills) and 'when' with an explicit 'Use when...' clause listing specific trigger scenarios like creating from scratch, updating, running evals, benchmarking, and optimizing descriptions.

Trigger Term Quality (3 / 3): Includes strong natural keywords users would say: 'create a skill', 'update', 'optimize', 'evals', 'benchmark', 'skill performance', 'triggering accuracy', 'description'. These cover a good range of terms a user working with skills would naturally use.

Distinctiveness / Conflict Risk (3 / 3): The description targets a very specific meta-domain (skill creation, modification, evaluation, and optimization), which is a clear niche unlikely to conflict with other skills. Terms like 'evals', 'variance analysis', and 'triggering accuracy' are highly distinctive.

Total: 12 / 12

Passed

Implementation: 70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a highly actionable and well-structured skill with excellent workflow clarity and progressive disclosure. Its major weakness is extreme verbosity — conversational tone, repeated summaries of the same core loop, unnecessary social commentary about user demographics, and casual asides ('Cool? Cool.', 'Sorry in advance but I'm gonna go all caps here') that consume significant tokens without adding instructional value. The content would be substantially more effective at perhaps 60% of its current length.

Suggestions

Remove conversational filler ('Cool? Cool.', 'Good luck!', the paragraph about plumbers and grandparents) and the repeated restatements of the core loop — state it once clearly at the top and reference it, rather than restating it three times.

Tighten the 'Communicating with the user' section to 2-3 sentences — the current explanation of JSON/assertion terminology thresholds is something Claude can infer from context without explicit instruction.

Consolidate the environment-specific sections (Claude.ai, Cowork) into a compact table or decision matrix rather than prose paragraphs that repeat what's already been said with 'skip this' annotations.

Remove the 'Principle of Lack of Surprise' section — Claude already knows not to create malware, and this adds ~50 words of zero instructional value.

Conciseness (1 / 3): The skill is extremely verbose at ~500+ lines with significant conversational padding ('Cool? Cool.'), unnecessary explanations of concepts Claude knows (what plumbers and grandparents are doing), repeated emphasis of the same points (the core loop is stated 3 times), and casual asides that waste tokens without adding actionable value.

Actionability (3 / 3): Despite the verbosity, the skill provides highly concrete, executable guidance: specific CLI commands, exact JSON schemas, file path conventions, step-by-step sequences with actual code blocks, and precise instructions for tools like generate_review.py and aggregate_benchmark. The commands are copy-paste ready.

Workflow Clarity (3 / 3): The multi-step workflow is clearly sequenced with explicit validation checkpoints: spawn runs → draft assertions while waiting → capture timing → grade → aggregate → launch viewer → read feedback → improve → repeat. Each step has clear inputs/outputs and the iteration loop includes explicit stopping criteria. Feedback loops are well-defined.

Progressive Disclosure (3 / 3): The skill effectively uses progressive disclosure with clear references to external files: agents/grader.md, agents/comparator.md, agents/analyzer.md, references/schemas.md, and assets/eval_review.html. References are one level deep, clearly signaled with context about when to read them, and the main SKILL.md serves as an orchestrating overview.

Total: 10 / 12

Passed
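The aggregation step in the workflow above (grade each run, then combine per-scenario scores into a benchmark average with a variance measure) can be sketched in a few lines of Python. This is an illustrative stand-in, not the skill's actual aggregate_benchmark tool; the function and scenario names are hypothetical:

```python
from statistics import mean, pstdev

def aggregate(scores: dict[str, list[float]]) -> dict:
    """Combine per-scenario eval run scores into a benchmark summary."""
    # Average the runs within each scenario first, so scenarios with
    # more runs don't dominate the overall average.
    per_scenario = {name: mean(runs) for name, runs in scores.items()}
    all_runs = [s for runs in scores.values() for s in runs]
    return {
        "per_scenario": per_scenario,
        "average": mean(per_scenario.values()),
        "spread": pstdev(all_runs),  # simple run-to-run variance measure
    }

# Three hypothetical scenarios with two runs each (made-up scores):
summary = aggregate({
    "create-skill": [0.9, 1.0],
    "run-evals":    [0.9, 0.9],
    "benchmark":    [1.0, 1.0],
})
print(round(summary["average"], 2))  # → 0.95
```

Reporting the spread alongside the average is what makes the "variance analysis" in the skill's description meaningful: two skills with the same average can differ greatly in run-to-run reliability.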

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: anthropics/claude-plugins-official (Reviewed)
