Deep explanation of complex code, files, or concepts. Routes to expert agents, uses structural search, generates mermaid diagrams. Triggers on: explain, deep dive, how does X work, architecture, data flow.
Overall score: 68

Quality: 59% (does it follow best practices?)
Impact: 75% (2.14x average score across 3 eval scenarios)
Rating: Risky. Do not use without reviewing.

To optimize this skill with Tessl, run:

npx tessl skill review --optimize ./data/skills-md/0xdarkmatter/claude-mods/explain/SKILL.md

Quality
Discovery
92%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description that clearly communicates capabilities and includes explicit trigger terms. The 'Triggers on:' clause is well-structured with natural user phrases. The main weakness is that some trigger terms like 'explain' are very broad and could cause conflicts with other skills in a large skill library.
Suggestions
Consider narrowing the 'explain' trigger by qualifying it (e.g., 'explain complex code' or 'explain codebase architecture') to reduce conflict risk with simpler explanation skills.
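A narrowed trigger clause could look like this in the SKILL.md frontmatter (a sketch only; the exact wording below is illustrative, not the skill's actual text):

```yaml
---
name: explain
description: >
  Deep explanation of complex code, files, or concepts. Routes to expert
  agents, uses structural search, generates mermaid diagrams. Triggers on:
  explain complex code, explain codebase architecture, deep dive,
  how does X work, data flow.
---
```

Qualifying the broad terms while keeping the distinctive ones ('deep dive', 'data flow') preserves the 92% discovery strengths while reducing overlap with simpler explanation skills.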
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'deep explanation of complex code, files, or concepts', 'routes to expert agents', 'uses structural search', 'generates mermaid diagrams'. These are distinct, concrete capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (deep explanation, routing to expert agents, structural search, mermaid diagrams) and 'when' (explicit 'Triggers on:' clause with specific trigger terms). The trigger guidance is explicit and well-defined. | 3 / 3 |
| Trigger Term Quality | Includes natural trigger terms users would actually say: 'explain', 'deep dive', 'how does X work', 'architecture', 'data flow'. These are realistic phrases users would use when seeking deep explanations. | 3 / 3 |
| Distinctiveness / Conflict Risk | While 'deep dive' and 'mermaid diagrams' are fairly distinctive, terms like 'explain' and 'how does X work' are quite broad and could overlap with general Q&A or documentation skills. 'Architecture' and 'data flow' help narrow the niche, but there is still moderate conflict risk with other code explanation or documentation skills. | 2 / 3 |
| Total | | 11 / 12 Passed |
Implementation
27%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is significantly over-engineered and verbose for what it does. It includes extensive template content (mermaid diagrams, output format), routing tables, and CLI tool documentation that Claude already knows or could be split into reference files. The core workflow is reasonable but lacks validation steps and error handling, and the sheer volume of content undermines its utility as a quick-reference skill.
Suggestions
Reduce content by 60-70%: remove mermaid diagram templates (Claude knows these), trim the output format to just section headers, and eliminate explanations of what tools like tokei and bat do.
Split the expert routing table, output template, and depth/focus mode details into separate reference files (e.g., ROUTING.md, OUTPUT_FORMAT.md) and link to them from the main skill.
Add validation checkpoints: what to do when ast-grep finds no matches, how to verify the explanation covers the target adequately, and error recovery for missing tools.
Remove the 'Architecture' ASCII diagram — it duplicates the 'Execution Steps' section and wastes significant tokens.
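The suggested validation checkpoint could be sketched as a small shell helper. This is hypothetical: the `search_with_fallback` name and the `def $NAME` pattern are illustrative, not part of the skill itself.

```shell
#!/bin/sh
# Hypothetical checkpoint: try structural search first, fall back to
# plain grep when ast-grep is missing or the pattern matches nothing.
search_with_fallback() {
  target="$1"
  matches=""
  if command -v ast-grep >/dev/null 2>&1; then
    # Pattern is illustrative; parse errors are treated as "no match".
    matches=$(ast-grep run --pattern 'def $NAME' "$target" 2>/dev/null || true)
  fi
  if [ -z "$matches" ]; then
    echo "structural search empty; falling back to grep" >&2
    grep -rn "def " "$target" 2>/dev/null || echo "no matches in $target" >&2
  else
    printf '%s\n' "$matches"
  fi
}
```

Calling `search_with_fallback src/` then gives the agent something concrete to report even when ast-grep is unavailable, instead of silently proceeding with an empty result.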
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~250+ lines. The ASCII architecture diagram, extensive routing tables, full mermaid diagram templates, and explanation output templates are all things Claude already knows how to produce. The skill explains concepts like mermaid diagram types, what tokei does, and how to check command availability, all unnecessary padding. | 1 / 3 |
| Actionability | Contains concrete bash commands and tool invocations, but much of it is pseudocode-like (e.g., `$TARGET` variable usage without clear context of how it is populated). The routing table and expert agent invocation via 'Task tool with subagent_type' is vague about actual implementation. The output template is a skeleton rather than executable guidance. | 2 / 3 |
| Workflow Clarity | The 5-step workflow is clearly sequenced with a nice ASCII diagram, but there are no validation checkpoints or error recovery steps. What happens if ast-grep finds nothing? What if the expert agent produces a poor explanation? There are no feedback loops for verifying output quality. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. The full mermaid diagram templates, complete output format specification, all depth/focus mode details, and CLI tool tables are all inlined. Content like the output template, expert routing details, and mermaid examples should live in separate reference files. | 1 / 3 |
| Total | | 6 / 12 Passed |
Validation
90%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
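The single warning can usually be cleared by moving the unrecognized key under `metadata`. A sketch of what that might look like (the `author` key is assumed for illustration; the review does not name the offending key):

```yaml
---
name: explain
description: Deep explanation of complex code, files, or concepts.
metadata:
  author: 0xdarkmatter  # assumed example of a key the spec does not recognize
---
```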