
skill-curator

This skill should be used when the user asks to "review installed skills", "find duplicates", "detect skill overlaps", "identify skill gaps", "optimize skills", "audit my skills", or "troubleshoot skill conflicts". Supports Gemini, Claude Code, Cursor, Copilot, Windsurf, and custom setups.

Install with Tessl CLI

npx tessl i github:back1ply/LLM-Skills --skill skill-curator

Overall score: 77%

Does it follow best practices?

Validation for skill structure


Discovery: 72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description excels at trigger term coverage and distinctiveness, making it easy for Claude to know when to select it. However, it is structured backwards: it leads with 'when' triggers but lacks a clear 'what' statement explaining the concrete actions and outputs the skill provides. The description reads more like a list of activation phrases than a capability summary.

Suggestions

Add a leading capability statement describing what the skill does, e.g., 'Analyzes installed AI coding assistant skills to identify duplicates, detect overlaps, find coverage gaps, and resolve conflicts.'

Describe concrete outputs the skill produces, such as 'generates audit reports', 'produces deduplication recommendations', or 'creates skill inventory summaries'.
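A minimal sketch of how these two suggestions could combine in the skill's SKILL.md frontmatter. The wording below is illustrative, not the skill's actual metadata:

```yaml
---
name: skill-curator
description: >
  Analyzes installed AI coding assistant skills to identify duplicates,
  detect overlaps, find coverage gaps, and resolve conflicts, producing
  audit reports and deduplication recommendations. Use when the user asks
  to "review installed skills", "find duplicates", "audit my skills", or
  "troubleshoot skill conflicts". Supports Gemini, Claude Code, Cursor,
  Copilot, Windsurf, and custom setups.
---
```

Leading with the capability statement gives an agent a 'what' to match against before the 'when' triggers.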

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (skill management) and lists actions like 'review', 'find duplicates', 'detect overlaps', 'identify gaps', 'optimize', 'audit', 'troubleshoot conflicts', but these are mostly trigger phrases rather than concrete capability descriptions of what the skill actually does. | 2 / 3 |
| Completeness | Strong on 'when' with explicit trigger phrases, but weak on 'what': it doesn't clearly explain what the skill actually does beyond listing trigger scenarios. The platform list (Gemini, Claude Code, etc.) hints at scope but doesn't describe concrete outputs or capabilities. | 2 / 3 |
| Trigger Term Quality | Excellent coverage of natural user phrases: 'review installed skills', 'find duplicates', 'detect skill overlaps', 'identify skill gaps', 'optimize skills', 'audit my skills', 'troubleshoot skill conflicts'. These are terms users would naturally say when needing this functionality. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very distinct niche focused specifically on skill/plugin management and auditing. The specific trigger terms like 'skill overlaps', 'skill gaps', 'skill conflicts' are unlikely to conflict with other skills. | 3 / 3 |
| **Total** | | **10 / 12** (Passed) |

Implementation: 70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill demonstrates strong organizational structure with clear phased workflows and excellent progressive disclosure through well-referenced supporting files. However, it lacks concrete executable code/commands for the discovery and analysis steps, relying instead on descriptive instructions. Some sections could be tightened to improve token efficiency.

Suggestions

Add executable code snippets for platform detection (e.g., actual shell commands or Python code to scan directories and parse SKILL.md frontmatter)

Replace descriptive discovery instructions with concrete commands like `find .agent/skills -name 'SKILL.md'` or equivalent programmatic approaches

Condense the platform support table and marketplace hierarchy into more compact formats, moving detailed listings entirely to reference files
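As a concrete illustration of the first two suggestions, here is a minimal sketch of skill discovery: scan candidate skill directories and pull the YAML frontmatter out of each SKILL.md. The directory names and parsing approach are assumptions for illustration, not the skill's actual detection logic:

```python
from pathlib import Path

# Hypothetical skill directories; real platform detection would vary per tool.
SKILL_DIRS = [".agent/skills", ".claude/skills", ".cursor/skills"]

def parse_frontmatter(text: str) -> dict:
    """Extract simple key: value pairs from a ----delimited frontmatter block."""
    meta = {}
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return meta
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter block
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def discover_skills(root: str = ".") -> list[dict]:
    """Return frontmatter metadata for every SKILL.md under known skill dirs."""
    found = []
    for rel in SKILL_DIRS:
        base = Path(root) / rel
        for skill_md in base.glob("*/SKILL.md"):
            meta = parse_frontmatter(skill_md.read_text())
            meta["path"] = str(skill_md)
            found.append(meta)
    return found
```

Collecting name and description per skill is enough input for the duplicate and overlap analysis the review asks for; a shell one-liner like `find .agent/skills -name 'SKILL.md'` covers the discovery half alone.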

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is reasonably efficient but includes some unnecessary verbosity like the full platform table and repeated explanations. The marketplace hierarchy section could be more condensed, and some operational rules restate obvious points. | 2 / 3 |
| Actionability | Provides structured guidance with tables and phases, but lacks executable code examples. The 'Execute discovery based on detected platform' section describes what to do abstractly rather than providing concrete commands or scripts Claude could run. | 2 / 3 |
| Workflow Clarity | Excellent multi-phase workflow with clear sequencing (Phase 0 → Phase 1 → Phase 2 → Analysis Phases). Each phase has explicit steps, and the operational rules provide clear decision points. The Quick vs Full mode branching is well-defined. | 3 / 3 |
| Progressive Disclosure | Exemplary structure with a clear overview in the main file and well-signaled one-level-deep references to methodology, templates, marketplace reference, and portfolio templates. Navigation is intuitive and content is appropriately split. | 3 / 3 |
| **Total** | | **10 / 12** (Passed) |

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

