This skill should be used when the user asks to "review installed skills", "find duplicates", "detect skill overlaps", "identify skill gaps", "optimize skills", "audit my skills", or "troubleshoot skill conflicts". Supports Gemini, Claude Code, Cursor, Copilot, Windsurf, and custom setups.
Overall score: 72

Quality: 64% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Advisory: Suggest reviewing before use

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skill-management/skills/skill-curator/SKILL.md`

## Quality
### Discovery (72%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at trigger term coverage and distinctiveness, providing many natural phrases users would say and targeting a clear niche. Its main weakness is the lack of explicit capability description — it tells Claude when to use the skill but not what the skill concretely does or produces. Adding a clear 'what it does' statement would significantly improve it.
Suggestions:

- Add an explicit 'what' clause at the beginning describing concrete actions, e.g., 'Scans and analyzes installed AI coding skills to find duplicates, detect overlaps, identify coverage gaps, and recommend optimizations.'
- Describe the outputs or deliverables the skill produces, e.g., 'Generates a report listing duplicate skills, overlapping triggers, and suggested consolidations.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (skill management/auditing) and lists some actions like 'review installed skills', 'find duplicates', 'detect skill overlaps', 'identify skill gaps', 'optimize skills', 'audit my skills', 'troubleshoot skill conflicts'. However, these are presented as trigger phrases rather than concrete capability descriptions of what the skill actually does. It mentions platform support but doesn't describe the concrete outputs or actions performed. | 2 / 3 |
| Completeness | The 'when' is very well covered with explicit trigger phrases. However, the 'what does this do' is weak: it never clearly states what the skill actually does or produces. The trigger phrases imply capabilities but don't explicitly describe the actions the skill performs (e.g., 'Scans installed skills to find duplicates, detect overlaps, and identify gaps'). The description reads more like a 'when' clause without a proper 'what' clause. | 2 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would actually say: 'review installed skills', 'find duplicates', 'detect skill overlaps', 'identify skill gaps', 'optimize skills', 'audit my skills', 'troubleshoot skill conflicts'. These are natural phrases a user would type. Also includes platform names (Gemini, Claude Code, Cursor, Copilot, Windsurf) which are useful triggers. | 3 / 3 |
| Distinctiveness / Conflict Risk | This is a very distinct niche: skill auditing, duplicate detection, and overlap analysis across AI coding tools. It's unlikely to conflict with other skills since it's a meta-skill about managing other skills, and the specific platform names and action terms create a clear, unique identity. | 3 / 3 |
| **Total** | | **10 / 12 Passed** |
### Implementation (57%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-organized skill with strong progressive disclosure and clear phased structure. Its main weaknesses are the lack of truly executable/concrete guidance (most steps are descriptive outlines rather than specific commands) and the absence of validation checkpoints or error handling in the workflow. Some content like the full marketplace hierarchy table could be moved to reference files to improve conciseness.
Suggestions:

- Add concrete, executable examples for key steps, e.g., actual shell commands for scanning directories (`find .agent/skills -name 'SKILL.md'`), actual code for parsing frontmatter, or actual commands for fetching marketplace.json (`curl -s https://...`).
- Add validation checkpoints and error handling: what to do if no skills are found, if a platform can't be detected, if marketplace.json is unreachable, or if scoring produces ties.
- Move the Marketplace Preference Hierarchy table and detailed rules to `references/marketplace-reference.md` (which already exists) and keep only a brief summary in the main file to reduce token usage.
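The first two suggestions could be illustrated with a short, hedged sketch. The `.agent/skills` root, the frontmatter parsing, and the graceful-degradation behavior below are assumptions for illustration, not the skill's actual implementation:

```python
import json
import urllib.request
from pathlib import Path


def discover_skills(root=".agent/skills"):
    """Scan for SKILL.md files under root; return an empty list if none exist."""
    base = Path(root)
    if not base.is_dir():
        return []  # checkpoint: no skills directory, nothing to audit
    return sorted(base.rglob("SKILL.md"))


def parse_frontmatter(path):
    """Naive 'key: value' parse of a leading YAML frontmatter block."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # checkpoint: file has no frontmatter
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta


def fetch_marketplace(url):
    """Fetch marketplace.json; None signals 'unreachable, degrade gracefully'."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except OSError:
        return None
```

A caller would branch on the empty-list and `None` returns, which is exactly the kind of validation checkpoint the second suggestion asks for.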
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably well-structured but includes some unnecessary verbosity, such as the full platform support table and marketplace hierarchy details that could be in reference files. The operational rules section repeats guidance already implied by the workflow. However, it avoids explaining basic concepts Claude already knows. | 2 / 3 |
| Actionability | The skill provides structured steps and detection logic, but most guidance is descriptive rather than executable. There are no actual commands or code snippets to run; the 'code blocks' are pseudocode/text outlines (e.g., the platform detection order). The scoring formula mentions weights but doesn't show how to compute them. The example workflow is helpful but still high-level. | 2 / 3 |
| Workflow Clarity | The phased structure (Phase 0 → Phase 1 → Phase 2 → Analysis Phases) provides clear sequencing, and the Quick vs Full mode branching is well-defined. However, there are no explicit validation checkpoints or error recovery steps, e.g., what happens if discovery finds no skills, if the marketplace.json fetch fails, or if scoring produces ambiguous results. | 2 / 3 |
| Progressive Disclosure | Excellent use of progressive disclosure with a clear overview in the main file and well-signaled one-level-deep references to four specific reference files (analysis-methodology.md, report-template.md, marketplace-reference.md, portfolio-templates.md). Content is appropriately split between the overview and detailed references. | 3 / 3 |
| **Total** | | **9 / 12 Passed** |
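The "weights without a computation" gap flagged under Actionability could be closed with a few lines. The dimension names and equal weights below are illustrative assumptions, not the skill's actual formula (the report's displayed 57% evidently uses a different weighting, which is precisely why showing the computation matters):

```python
def weighted_score(scores, weights):
    """Combine 0-3 dimension scores into a 0-100 percentage.

    Weights are normalized, so they need not sum to 1; raising on
    all-zero weights is the kind of edge case the review asks to handle.
    """
    total_weight = sum(weights[d] for d in scores)
    if total_weight == 0:
        raise ValueError("all dimension weights are zero")
    raw = sum(scores[d] * weights[d] for d in scores) / (3 * total_weight)
    return round(raw * 100)


# Illustrative: the four Implementation dimensions above, equally weighted.
dims = {"Conciseness": 2, "Actionability": 2, "Workflow Clarity": 2,
        "Progressive Disclosure": 3}
print(weighted_score(dims, {d: 1.0 for d in dims}))  # prints 75, i.e. 9/12
```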
### Validation (100%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed; validation of the skill structure reported no warnings or errors.