
skill-finder

This skill should be used when the user asks to "find a skill", "discover plugins", "search for an MCP", "what plugins exist for X", "fill my skill gaps", "improve my setup", or when Claude recognizes it lacks tools for a task. Searches GitHub and marketplaces to suggest installations.

67

Quality

58%

Does it follow best practices?

Impact

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./skill-management/skills/skill-finder/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description excels at trigger term coverage and completeness, providing numerous natural phrases users would say and clearly stating both what the skill does and when to use it. The main weakness is that the 'what it does' portion is somewhat thin — 'Searches GitHub and marketplaces to suggest installations' could be more specific about the concrete actions performed (e.g., comparing options, checking compatibility, providing install commands).

Suggestions

Expand the capability description to list more specific actions, e.g., 'Searches GitHub repositories and plugin marketplaces, compares options, checks compatibility, and provides installation instructions.'
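Applied to the frontmatter of this skill's SKILL.md, the expanded description might read as follows. This is an illustrative sketch, not the maintainer's text; only the trigger phrases are taken from the existing description:

```yaml
---
name: skill-finder
description: >
  Use when the user asks to "find a skill", "discover plugins", "search for
  an MCP", "what plugins exist for X", "fill my skill gaps", or "improve my
  setup", or when Claude recognizes it lacks tools for a task. Searches
  GitHub repositories and plugin marketplaces, compares candidate options,
  checks compatibility, and provides concrete installation commands.
---
```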

Dimension scores

Specificity: 2 / 3
The description mentions 'Searches GitHub and marketplaces to suggest installations' which names the domain and a couple of actions, but doesn't list specific concrete actions like 'browse repositories', 'compare plugins', 'install skills', or 'analyze compatibility'.

Completeness: 3 / 3
Clearly answers both 'what' (searches GitHub and marketplaces to suggest installations) and 'when' (explicit trigger phrases listed, plus the meta-trigger of Claude recognizing it lacks tools for a task). The 'Use when' equivalent is front-loaded with specific trigger scenarios.

Trigger Term Quality: 3 / 3
Excellent coverage of natural trigger terms: 'find a skill', 'discover plugins', 'search for an MCP', 'what plugins exist for X', 'fill my skill gaps', 'improve my setup'. These are phrases users would naturally say when looking for this functionality.

Distinctiveness / Conflict Risk: 3 / 3
This skill occupies a clear niche — skill/plugin discovery and installation suggestions. The trigger terms like 'find a skill', 'discover plugins', 'search for an MCP' are highly specific and unlikely to conflict with other skills.

Total: 11 / 12 (Passed)

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is significantly over-engineered and verbose for what it does. The core workflow (detect gap → search → present results → offer install) is sound but is buried under repetitive tables, redundant search query templates stated multiple times, and explanations of concepts Claude already understands. The content would benefit enormously from being cut to ~30% of its current size and splitting reference tables into separate files.

Suggestions

Cut content by 60-70%: merge the redundant search query sections (they appear in 'Formulate Search Strategy', 'Search Query Templates', and 'Search Execution Protocol'), remove the quality/warning signals tables (Claude knows what GitHub stars mean), and eliminate the common gap categories table (Claude can formulate search terms).

Split reference material into separate files: move 'Common Gap Categories & Search Terms' to a SEARCH_TERMS.md, move 'Result Presentation Format' templates to TEMPLATES.md, and keep SKILL.md as a concise overview with the core workflow.

Add explicit failure/feedback loops: what happens when no plugins are found, when installation fails, or when a plugin doesn't work as expected. Currently the workflow assumes the happy path throughout.

Remove the 'Proactive Gap Detection Examples' section or reduce to one example — these are lengthy dialogue scripts that demonstrate obvious behavior Claude can infer from the shorter trigger table.
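Taken together, these suggestions point toward a much shorter SKILL.md that delegates detail to reference files. A hypothetical skeleton is sketched below; the file names SEARCH_TERMS.md and TEMPLATES.md come from the suggestions above, and the step wording is illustrative, not the skill's actual content:

```markdown
## Workflow
1. Detect the capability gap from the user's request.
2. Search GitHub and plugin marketplaces (query patterns: SEARCH_TERMS.md).
3. Present the top candidates (output formats: TEMPLATES.md) and offer to install.
4. If no results are found, say so and propose broader search terms.
5. If installation fails, report the error and suggest the next-best candidate.
```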

Dimension scores

Conciseness: 1 / 3
Extremely verbose at ~350+ lines. Contains massive amounts of redundant tables, repeated search query templates across multiple sections, and information Claude already knows (how to search GitHub, what stars mean, how npm works). The same search patterns are restated 3-4 times in different formats. The quality signals tables explain obvious concepts like 'Popular = >50 stars'.

Actionability: 2 / 3
Provides concrete search query templates and install command patterns, but much of the guidance is templated placeholders rather than truly executable commands. The search queries are text templates, not actual tool invocations. The install commands use bracket placeholders. However, the MCP JSON config examples and marketplace commands are reasonably concrete.

Workflow Clarity: 2 / 3
Has a multi-step workflow (analyze gaps → search → evaluate → present), but validation checkpoints are weak. Step 3 'Verify Compatibility' lists checks but doesn't specify what to do on failure. There's no feedback loop for when searches return no results or when installations fail. The post-discovery actions mention verification but lack concrete validation steps.

Progressive Disclosure: 1 / 3
Monolithic wall of content with no references to external files. The MCP discovery section, common gap categories tables, search query templates, and result presentation formats could all be split into separate reference files. Everything is inlined into one massive document, making it expensive to load for every invocation.

Total: 6 / 12 (Passed)
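For context, the 'MCP JSON config examples' the Actionability note calls reasonably concrete typically follow the standard `mcpServers` shape used in Claude's configuration. The server name and package below are placeholders, not taken from the skill:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server-placeholder"]
    }
  }
}
```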

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: back1ply/LLM-Skills (Reviewed)
