Analyzes existing plugins to extract their capabilities, then adapts and applies those skills to the current task. Acts as a universal skill chameleon that learns from other plugins.
80

6%
Does it follow best practices?
Impact: 94% (1.00x average score across 9 eval scenarios)
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./backups/skills-migration-20251108-070147/plugins/examples/pi-pathfinder/skills/pi-pathfinder/SKILL.md`

Quality
Discovery
0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is highly abstract and relies on buzzwords ('universal skill chameleon', 'learns from other plugins') without specifying any concrete actions, domains, or trigger conditions. It fails to communicate what specific tasks it performs or when Claude should select it, making it essentially unusable for skill selection among multiple options.
Suggestions
- Replace abstract language with concrete actions — specify exactly what 'analyzing plugins' and 'adapting skills' means in practice (e.g., 'Reads plugin source code to extract API endpoints and configuration patterns').
- Add an explicit 'Use when...' clause with natural trigger terms that describe the situations a user would encounter (e.g., 'Use when the user wants to replicate functionality from an existing plugin or integrate capabilities from multiple plugins').
- Define a clear, narrow scope to reduce conflict risk — currently this could match almost any request. Specify the exact domain or file types involved.
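Putting those suggestions together, a rewritten frontmatter description might look like the sketch below. The wording and the `name` value are illustrative assumptions, not text taken from the skill itself:

```markdown
---
name: pi-pathfinder
description: >
  Reads installed plugin source (commands, agents, SKILL.md files) to
  extract reusable patterns such as API calls, prompt structures, and
  configuration conventions, then adapts them to the current task.
  Use when the user wants to replicate functionality from an existing
  plugin or combine capabilities from several installed plugins.
---
```

Note the explicit "Use when" clause and the concrete nouns (commands, agents, SKILL.md files), which give the agent real trigger terms to match against.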
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language like 'analyzes existing plugins', 'extract their capabilities', and 'adapts and applies those skills'. No concrete actions are listed — 'skill chameleon' is a buzzword, not a capability. | 1 / 3 |
| Completeness | The 'what' is vaguely described with abstract language, and there is no 'when' clause or explicit trigger guidance at all. Both dimensions are weak or missing. | 1 / 3 |
| Trigger Term Quality | There are no natural keywords a user would say. Terms like 'universal skill chameleon' and 'learns from other plugins' are not things users would type. Missing any concrete trigger terms related to a real task domain. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is extremely generic — 'adapts and applies skills to the current task' could overlap with virtually any skill. There is no clear niche or distinct trigger to differentiate it. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation
12%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is extremely verbose and abstract, describing a conceptual meta-process rather than providing actionable instructions. It reads more like a design document or README explaining what a hypothetical tool would do, rather than a skill that teaches Claude concrete steps. The examples are narrative walkthroughs of imaginary scenarios rather than executable guidance, and the entire skill assumes a plugin marketplace structure that may not exist.
Suggestions
Cut content by 60-70%: remove sections like Meta-Learning, Success Criteria, Transparency, the closing summary, and explanations of what plugins contain (Claude can read files). Focus only on the discovery commands and adaptation logic.
Replace narrative examples with concrete, executable workflows: instead of 'Find: owasp-top-10-scanner', show actual file reading commands and pattern extraction with real output formats.
Add validation checkpoints: what happens when no relevant plugins are found? When extracted patterns conflict? Include explicit fallback steps and error handling.
Split detailed examples into a separate EXAMPLES.md file and keep SKILL.md as a concise overview with the core discovery/extraction/adaptation workflow.
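As an illustration of what concrete, executable discovery guidance with a validation checkpoint could look like, here is a minimal shell sketch. The plugin directory layout and the 'owasp' keyword are assumptions for the example, not details from the skill:

```shell
set -eu

# Assumed layout: <root>/plugins/<plugin>/skills/<skill>/SKILL.md
# Build a throwaway tree so the sketch is self-contained.
root=$(mktemp -d)
mkdir -p "$root/plugins/scanner/skills/scan"
printf 'description: Scans code for OWASP issues\n' \
  > "$root/plugins/scanner/skills/scan/SKILL.md"

# Discovery: case-insensitively find SKILL.md files matching a task keyword.
matches=$(grep -il 'owasp' "$root"/plugins/*/skills/*/SKILL.md || true)

# Validation checkpoint: fall back explicitly when nothing matches,
# rather than silently continuing.
if [ -z "$matches" ]; then
  echo "no relevant plugins found; proceeding without adaptation"
else
  echo "candidate skills:"
  echo "$matches"
fi
```

The checkpoint is the point of the sketch: the skill currently never says what to do when discovery comes back empty.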
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~180 lines. Explains concepts Claude already knows (what commands/agents/skills are, how to read files, basic reasoning processes). The 'Meta-Learning' section, 'Success Criteria', 'Transparency' section, and much of the 'Reasoning Process' are padding that don't add actionable value. The closing summary sentence is pure fluff. | 1 / 3 |
| Actionability | Despite its length, the skill provides almost no concrete, executable guidance. The bash commands are generic (ls, grep). The examples are hypothetical narratives describing what would happen rather than providing actual code or commands to execute. References to plugins like 'owasp-top-10-scanner' appear to be fictional. There's nothing copy-paste ready. | 1 / 3 |
| Workflow Clarity | The 5-step process (Task Analysis → Plugin Discovery → Capability Extraction → Pattern Synthesis → Skill Application) provides a clear sequence, and the examples walk through it. However, there are no validation checkpoints, no error handling steps, and no feedback loops for when plugin discovery fails or adaptation doesn't work. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. Everything is inline including three lengthy examples, detailed reasoning processes, limitations lists, and meta-learning concepts. Much of this content could be split into separate reference files or simply removed. | 1 / 3 |
| Total | | 5 / 12 Passed |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | | 10 / 11 Passed |
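To clear the one remaining warning, the frontmatter's `allowed-tools` list can be restricted to conventional tool names. The names below are the standard Claude Code tools a discovery-and-adaptation skill would plausibly need; the exact set is an assumption and should be matched to what the skill actually does:

```markdown
---
allowed-tools: Read, Grep, Glob
---
```

Unrecognized or invented tool names in this field trigger the `allowed_tools_field` warning above.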