This skill should be used when the user asks to "review installed skills", "find duplicates", "detect skill overlaps", "identify skill gaps", "optimize skills", "audit my skills", or "troubleshoot skill conflicts". Supports Gemini, Claude Code, Cursor, Copilot, Windsurf, and custom setups.
Quality: 64% (Does it follow best practices?)
Impact: — (No eval scenarios have been run)
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skill-management/skills/skill-curator/SKILL.md`

Quality
Discovery: 72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at trigger term coverage and distinctiveness, providing a clear niche for skill auditing and optimization across multiple AI platforms. However, it is structured almost entirely as a 'when to use' clause without a proper 'what it does' statement, making it unclear what concrete outputs or analyses the skill produces. Adding an explicit capability statement would significantly improve it.
Suggestions
Add an explicit 'what it does' statement before the trigger terms, e.g., 'Analyzes installed skill files to find duplicates, detect overlaps, identify coverage gaps, and recommend optimizations.'
Restructure to lead with concrete capabilities in third person (e.g., 'Audits and analyzes skill configurations across AI coding tools...') followed by the 'Use when...' clause.
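Taken together, these suggestions could yield a frontmatter description along the following lines. The wording is illustrative only, assembled from phrases already present in the current description and the suggestions above:

```
description: Audits and analyzes skill configurations across AI coding tools:
  finds duplicates, detects overlaps, identifies coverage gaps, and recommends
  optimizations. Use when the user asks to "review installed skills", "find
  duplicates", "detect skill overlaps", "identify skill gaps", "optimize
  skills", "audit my skills", or "troubleshoot skill conflicts". Supports
  Gemini, Claude Code, Cursor, Copilot, Windsurf, and custom setups.
```

This keeps the strong trigger-term coverage intact while leading with a third-person capability statement, as the restructuring suggestion recommends.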
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names several actions like 'review installed skills', 'find duplicates', 'detect skill overlaps', 'identify skill gaps', 'optimize skills', 'audit my skills', and 'troubleshoot skill conflicts', but these are embedded within trigger phrases rather than stated as concrete capabilities the skill performs. It lacks a clear 'what it does' statement describing its actual actions. | 2 / 3 |
| Completeness | The 'when' is very well covered with explicit trigger phrases. However, the 'what' is weak — the description never clearly states what the skill actually does (e.g., 'Analyzes installed skill files to find duplicates, detect overlaps, and identify gaps'). The capabilities are only implied through the trigger terms rather than explicitly described. | 2 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'review installed skills', 'find duplicates', 'detect skill overlaps', 'identify skill gaps', 'optimize skills', 'audit my skills', 'troubleshoot skill conflicts'. Also includes platform names (Gemini, Claude Code, Cursor, Copilot, Windsurf) which are useful trigger terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | This skill occupies a very clear niche — meta-analysis of installed skills/plugins across AI coding tools. The specific trigger terms like 'skill overlaps', 'skill gaps', 'audit my skills', and 'skill conflicts' are highly distinctive and unlikely to conflict with other skills. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation: 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-organized skill with strong progressive disclosure and clear phased structure. Its main weaknesses are the lack of truly executable/actionable commands (most guidance is descriptive outlines rather than concrete code or commands) and the absence of validation checkpoints in the workflow. Some content like the full marketplace hierarchy table could be moved to reference files to improve conciseness.
Suggestions
Add concrete, executable commands for the discovery phase (e.g., actual shell commands like `find .agent/skills -name 'SKILL.md'` or `cat .claude/settings.json | jq '.plugins'`) instead of descriptive text blocks.
Add explicit validation checkpoints in the workflow, such as 'Present discovered inventory to user for confirmation before proceeding to analysis' and error handling for when no skills are found or platform detection is ambiguous.
Move the Marketplace Preference Hierarchy table and rules to the marketplace-reference.md file, keeping only a brief summary and link in the main skill to improve token efficiency.
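As a sketch of the first two suggestions, the discovery phase could shell out to concrete commands and pause on an empty inventory before analysis. The directory layout, `SKILL.md` filename, and `.plugins` settings key below are assumptions based on common agent setups, not details confirmed by the skill itself; the temporary fixture exists only so the sketch is runnable anywhere:

```shell
# Throwaway fixture so this sketch runs anywhere (illustrative only)
tmp=$(mktemp -d)
mkdir -p "$tmp/.agent/skills/skill-curator"
printf -- '---\nname: skill-curator\n---\n' > "$tmp/.agent/skills/skill-curator/SKILL.md"

# Discovery: enumerate SKILL.md files under the assumed skills directory
inventory=$(find "$tmp/.agent/skills" -name 'SKILL.md')

# Validation checkpoint: report and stop rather than analyze an empty inventory
if [ -z "$inventory" ]; then
  echo "No skills found; confirm the platform directory before continuing." >&2
  exit 1
fi
echo "$inventory"

# If a Claude Code settings file exists, list configured plugins (key assumed)
if command -v jq >/dev/null 2>&1 && [ -f "$tmp/.claude/settings.json" ]; then
  jq '.plugins // empty' "$tmp/.claude/settings.json"
fi
```

Presenting `$inventory` to the user for confirmation before the analysis phases would cover the checkpoint suggestion with minimal extra machinery.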
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably well-structured but includes some unnecessary verbosity, such as the full platform support table and marketplace hierarchy details that could be in reference files. The operational rules section repeats guidance already implied by the workflow. However, it avoids explaining basic concepts Claude already knows. | 2 / 3 |
| Actionability | The skill provides structured steps and detection logic, but most guidance is descriptive rather than executable. There are no actual commands or code snippets to run — the 'code blocks' are pseudocode/text outlines (e.g., the platform detection order is plain text, not executable). The inventory table is a template but lacks concrete implementation details. | 2 / 3 |
| Workflow Clarity | The phased workflow (Phase 0 → Phase 1 → Phase 2 → Analysis Phases) is clearly sequenced and logical. However, there are no explicit validation checkpoints or error recovery steps. For an auditing/analysis workflow that could produce incorrect recommendations, there's no 'verify with user before proceeding' or feedback loop for when discovery fails or produces unexpected results. | 2 / 3 |
| Progressive Disclosure | Excellent use of progressive disclosure with a clear overview in the main file and well-signaled one-level-deep references to four specific reference files (analysis-methodology.md, report-template.md, marketplace-reference.md, portfolio-templates.md). The main skill contains enough context to understand each area without needing to read the references. | 3 / 3 |
| Total | | 9 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
`cc0ada7`
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.