This skill analyzes code coverage metrics to identify untested code and generate comprehensive coverage reports. It is triggered when the user requests analysis of code coverage, identification of coverage gaps, or generation of coverage reports. The skill is best used to improve code quality by ensuring adequate test coverage and identifying areas for improvement. Use trigger terms like "analyze coverage", "code coverage report", "untested code", or the shortcut "cov".
Quality: 48% (Does it follow best practices?)
Impact: 90% (1.01x average score across 12 eval scenarios)
Validation: Passed, no known issues

Optimize this skill with Tessl: `npx tessl skill review --optimize ./backups/skills-migration-20251108-070147/plugins/testing/test-coverage-analyzer/skills/test-coverage-analyzer/SKILL.md`

Quality
Discovery
Score: 89%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description that clearly communicates its purpose and provides explicit trigger guidance. Its main weakness is that the capability descriptions could be more concrete—listing specific operations rather than high-level actions. The trigger terms and 'when to use' guidance are well-crafted and would help Claude select this skill appropriately.
Suggestions
Add concrete actions such as 'parse lcov/istanbul output, highlight uncovered branches, compare coverage percentages across commits' to improve specificity.
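To make the contrast concrete, an operation like 'highlight uncovered lines from lcov output' maps directly to code. The sketch below is illustrative only and not part of the skill under review; the tracefile path and function name are assumptions. A second sketch after the dimension table covers the commit-comparison case.

```python
# Minimal sketch: list uncovered lines per file from an lcov tracefile.
# The coverage/lcov.info path is an assumption for illustration.
from pathlib import Path

def uncovered_lines(tracefile: str = "coverage/lcov.info") -> dict[str, list[int]]:
    gaps: dict[str, list[int]] = {}
    current = None
    for raw in Path(tracefile).read_text().splitlines():
        line = raw.strip()
        if line.startswith("SF:"):              # start of a per-file record
            current = line[3:]
        elif line.startswith("DA:") and current:
            lineno, hits = line[3:].split(",")[:2]
            if int(hits) == 0:                  # executable line never hit
                gaps.setdefault(current, []).append(int(lineno))
        elif line == "end_of_record":
            current = None
    return gaps

if __name__ == "__main__":
    for path, lines in uncovered_lines().items():
        print(f"{path}: {len(lines)} uncovered lines -> {lines[:10]}")
```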
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (code coverage) and some actions (analyzes metrics, identifies untested code, generates reports), but the actions are somewhat generic and not as concrete as listing specific operations like 'parse lcov files, highlight uncovered branches, compare coverage between commits'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (analyzes code coverage metrics, identifies untested code, generates coverage reports) and 'when' (explicitly states trigger conditions and includes a 'Use trigger terms like...' clause with specific examples). | 3 / 3 |
| Trigger Term Quality | Includes good natural trigger terms: 'analyze coverage', 'code coverage report', 'untested code', and the shortcut 'cov'. These are terms users would naturally use when requesting this functionality. | 3 / 3 |
| Distinctiveness / Conflict Risk | Code coverage analysis is a clear niche distinct from general testing, code review, or static analysis skills. The specific trigger terms like 'coverage gaps', 'coverage report', and 'cov' shortcut make it unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
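For the 'compare coverage between commits' operation named above, a minimal sketch could total the LF/LH counters from two lcov tracefiles and report the delta. The file paths are assumptions for illustration.

```python
# Sketch: compare overall line coverage between two lcov tracefiles,
# e.g. one saved from the base commit and one from the current commit.
def lcov_percent(path: str) -> float:
    found = hit = 0
    with open(path) as fh:
        for line in fh:
            if line.startswith("LF:"):   # lines found in this record
                found += int(line[3:])
            elif line.startswith("LH:"): # lines hit in this record
                hit += int(line[3:])
    return 100.0 * hit / found if found else 0.0

base = lcov_percent("base/lcov.info")
head = lcov_percent("head/lcov.info")
print(f"coverage: {base:.1f}% -> {head:.1f}% ({head - base:+.1f} pts)")
```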
Implementation
Score: 7%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a marketing description or README overview than an actionable skill file. It lacks any concrete commands, executable code, or specific tool configurations that Claude could actually use. The content explains concepts Claude already understands while failing to provide the specific, copy-paste-ready instructions needed for a functional skill.
Suggestions
Replace abstract descriptions with concrete, executable commands for each supported coverage tool (e.g., `nyc --reporter=lcov npm test`, `python -m pytest --cov=src --cov-report=html`)
Add actual code examples showing how to parse coverage output, identify uncovered lines, and format a coverage report with specific output formats
Remove the 'How It Works', 'When to Use', 'Best Practices', and 'Integration' sections — they explain things Claude already knows and add no actionable value
Add a workflow with validation steps: run tests → check exit code → parse coverage data → compare against thresholds → report gaps, with error handling for common failures like missing config or test failures
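Pulling the suggestions above together, a hedged sketch of such a workflow for a Python project might look like the following. The tool choice (pytest-cov), the src/ layout, the coverage.json output, and the 80% threshold are all assumptions rather than anything the current skill specifies.

```python
# Sketch of the suggested workflow: run tests -> check exit code -> parse
# coverage -> compare against a threshold -> report gaps, with handling for
# missing configuration and test failures.
import json
import subprocess
import sys
from pathlib import Path

THRESHOLD = 80.0  # assumed project threshold

def main() -> int:
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "--cov=src", "--cov-report=json"],
        capture_output=True, text=True,
    )
    if result.returncode == 5:
        print("No tests were collected; nothing to measure.")
        return 1
    if result.returncode != 0:
        print("Tests failed; fix failures before trusting coverage numbers.")
        print(result.stdout[-2000:])
        return result.returncode

    report = Path("coverage.json")  # default output of --cov-report=json
    if not report.exists():
        print("coverage.json not found; is pytest-cov installed and configured?")
        return 1

    data = json.loads(report.read_text())
    total = data["totals"]["percent_covered"]
    print(f"Total coverage: {total:.1f}% (threshold {THRESHOLD}%)")
    for path, info in data["files"].items():
        missing = info.get("missing_lines", [])
        if missing:
            print(f"  {path}: missing lines {missing}")
    return 0 if total >= THRESHOLD else 1

if __name__ == "__main__":
    sys.exit(main())
```

An equivalent flow for a JavaScript project would wrap a command such as `nyc --reporter=json-summary npm test` in the same run/check/parse/compare steps.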
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is verbose and explains concepts Claude already knows (what coverage tools are, what line/branch/function coverage means, what CI/CD integration is). The 'How It Works', 'When to Use', 'Best Practices', and 'Integration' sections are largely filler that don't provide actionable new information. | 1 / 3 |
| Actionability | There are no concrete commands, executable code snippets, or specific tool configurations. The examples describe what the skill 'will do' in abstract terms rather than providing actual commands like `nyc --reporter=lcov npm test` or `coverage run -m pytest`. Everything is vague description rather than instruction. | 1 / 3 |
| Workflow Clarity | The steps listed are abstract ('execute the project's test suite with coverage tracking') with no actual commands, no validation checkpoints, and no error recovery. There's no guidance on what to do when coverage tools aren't configured or when tests fail. | 1 / 3 |
| Progressive Disclosure | The content is organized into logical sections with headers, which provides some structure. However, there are no references to external files, and content that could be split out (e.g., tool-specific configurations for nyc vs coverage.py vs JaCoCo) is neither inline nor referenced. The structure is reasonable but the content within it is weak. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Validation
Score: 100%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 checks passed. No warnings or errors.