
skill-creator

Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit, or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.


- Quality: 81% (Does it follow best practices?)
- Impact: No eval scenarios have been run
- Security (by Snyk): Advisory. Suggest reviewing before use.


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly articulates what the skill does (create, modify, evaluate, and optimize skills) and when to use it (with an explicit 'Use when...' clause covering multiple trigger scenarios). The language is specific, uses third person voice correctly, and targets a distinct niche that is unlikely to conflict with other skills.
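For reference, a description like this typically lives in the SKILL.md frontmatter. A minimal sketch using the description quoted above (the `name` and `description` field names follow common skill-frontmatter conventions and are an assumption, not taken from this page):

```yaml
---
name: skill-creator
description: >
  Create new skills, modify and improve existing skills, and measure skill
  performance. Use when users want to create a skill from scratch, edit or
  optimize an existing skill, run evals to test a skill, benchmark skill
  performance with variance analysis, or optimize a skill's description
  for better triggering accuracy.
---
```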

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'Create new skills', 'modify and improve existing skills', 'measure skill performance', 'run evals', 'benchmark skill performance with variance analysis', 'optimize a skill's description for better triggering accuracy'. | 3 / 3 |
| Completeness | Clearly answers both 'what' (create, modify, improve, measure skills) and 'when' with an explicit 'Use when...' clause listing specific trigger scenarios like creating from scratch, editing, running evals, benchmarking, and optimizing descriptions. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'create a skill', 'edit', 'optimize', 'evals', 'benchmark', 'skill performance', 'triggering accuracy', 'description'. These cover a good range of terms a user working with skills would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description targets a very specific meta-domain — skill creation, editing, evaluation, and optimization — which is a clear niche unlikely to conflict with other skills. Terms like 'skill', 'evals', 'triggering accuracy', and 'variance analysis' are highly distinctive. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a highly actionable and well-structured skill with excellent workflow clarity, providing concrete commands, JSON schemas, and clear step-by-step processes for creating, testing, and iterating on skills. Its main weakness is extreme verbosity — the content is roughly 3-4x longer than necessary, with repeated core loops, conversational filler, and inline content that should be in reference files. The progressive disclosure structure references external files appropriately but fails to offload enough of its own bulk.

Suggestions

- Cut the content significantly: remove the 3 restatements of the core loop, conversational asides ('Cool? Cool.', 'Sorry in advance but'), and explanations of concepts Claude already knows. Target under 300 lines for the main body.
- Move environment-specific sections (Claude.ai instructions, Cowork instructions) into separate reference files (e.g., references/claude-ai.md, references/cowork.md) and reference them with one-line pointers from SKILL.md.
- Move the Description Optimization section (~100 lines) into its own reference file, since it's a distinct workflow that only runs after the main skill creation loop is complete.
- Remove the 'Communicating with the user' section — Claude already understands audience adaptation, and this adds ~150 words of guidance that doesn't change behavior meaningfully.
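The second and third suggestions amount to replacing inline sections with one-line pointers in SKILL.md. A hypothetical sketch of what that could look like (the references/claude-ai.md and references/cowork.md paths come from the suggestions; the description-optimization filename is illustrative):

```markdown
## Environment-specific instructions

- Claude.ai: see [references/claude-ai.md](references/claude-ai.md)
- Cowork: see [references/cowork.md](references/cowork.md)

## Description optimization

After the main creation loop is complete, follow the workflow in
[references/description-optimization.md](references/description-optimization.md)
<!-- hypothetical filename, chosen for illustration -->
```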

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose (~500+ lines) with significant padding, conversational asides ('Cool? Cool.'), repeated instructions (the core loop is stated 3 times), explanations of concepts Claude knows (what JSON is, how subagents work), and lengthy sections on communication style and user empathy that don't add actionable value. Much content could be cut without losing clarity. | 1 / 3 |
| Actionability | The skill provides highly concrete, executable guidance throughout: specific CLI commands (python -m scripts.aggregate_benchmark), exact JSON schemas for eval_metadata.json/evals.json/feedback.json/timing.json, specific file paths and directory structures, and copy-paste ready bash commands for launching the viewer, running optimization loops, and packaging skills. | 3 / 3 |
| Workflow Clarity | The multi-step workflow is clearly sequenced with explicit numbered steps (Step 1 through Step 5), validation checkpoints (grade runs, aggregate benchmarks, analyst pass before showing to user), feedback loops (iterate until user is happy or feedback is empty), and clear branching for different environments (Claude.ai, Cowork, Claude Code). Destructive operations aren't present, and the review-before-revise pattern is emphasized repeatedly. | 3 / 3 |
| Progressive Disclosure | The skill references external files well (agents/grader.md, agents/comparator.md, agents/analyzer.md, references/schemas.md) with clear guidance on when to read them. However, the SKILL.md body itself is monolithic and contains substantial inline content that could be split into reference files — the description optimization section, Claude.ai-specific instructions, and Cowork-specific instructions could each be separate files to keep the main body leaner. | 2 / 3 |
| Total | | 9 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (511 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 |

Passed
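The skill_md_line_count check that produced the warning above is essentially a line-count lint. A minimal re-implementation sketch (the function name mirrors the check's name; the 500-line threshold is an assumption, since the actual spec threshold is not shown on this page):

```python
def skill_md_line_count(text: str, limit: int = 500) -> tuple[str, str]:
    """Warn when a SKILL.md body grows past `limit` lines (threshold assumed)."""
    n = len(text.splitlines())
    if n > limit:
        # Mirrors the warning wording shown in the validation table.
        return ("warning",
                f"SKILL.md is long ({n} lines); "
                "consider splitting into references/ and linking")
    return ("pass", f"SKILL.md is {n} lines")
```

Applied to a 511-line SKILL.md, this would produce the Warning result reported in the table.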

Repository: coinbase/cds (Reviewed)

