This skill analyzes code coverage metrics to identify untested code and generate comprehensive coverage reports. It is triggered when the user requests analysis of code coverage, identification of coverage gaps, or generation of coverage reports. The skill is best used to improve code quality by ensuring adequate test coverage and identifying areas for improvement. Use trigger terms like "analyze coverage", "code coverage report", "untested code", or the shortcut "cov".
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill analyzing-test-coverage68
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that excels across all dimensions. It provides specific capabilities, includes natural trigger terms users would actually say, explicitly addresses both what the skill does and when to use it, and carves out a distinct niche in code coverage analysis. The description uses proper third-person voice throughout.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'analyzes code coverage metrics', 'identify untested code', and 'generate comprehensive coverage reports'. These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both what ('analyzes code coverage metrics to identify untested code and generate comprehensive coverage reports') and when ('triggered when the user requests analysis of code coverage, identification of coverage gaps, or generation of coverage reports') with explicit trigger terms. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'analyze coverage', 'code coverage report', 'untested code', and the shortcut 'cov'. These cover common variations of how users would request this functionality. | 3 / 3 |
| Distinctiveness / Conflict Risk | Has a clear niche focused specifically on code coverage analysis, with distinct triggers like 'coverage gaps', 'untested code', and the 'cov' shortcut. Unlikely to conflict with general testing or code analysis skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
20%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill's content is overly descriptive and lacks actionable, executable guidance. It explains what coverage analysis is rather than providing concrete commands and code examples Claude can execute. The workflow is vague and is missing validation steps that would be critical for running coverage tools across different project types.
Suggestions
- Add concrete, executable commands for common coverage tools (e.g., `npx nyc npm test`, `pytest --cov=src --cov-report=html`)
- Remove explanatory text about what coverage is and what the skill 'enables'; Claude knows these concepts
- Add validation steps and error handling guidance (e.g., 'If the coverage tool is not found, check package.json for nyc or jest --coverage configuration')
- Replace abstract examples with actual command sequences and expected output formats
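The first and third suggestions can be sketched together. This is a minimal illustration of what concrete, executable guidance might look like inside the skill, not the skill's actual logic; the `detect_coverage_cmd` function name and the specific file checks are assumptions for the example:

```shell
# detect_coverage_cmd: print the coverage command appropriate for the
# current directory, or fail with a hint when no known tool is configured.
detect_coverage_cmd() {
    if [ -f package.json ] && grep -q '"nyc"' package.json; then
        echo "npx nyc npm test"
    elif [ -f package.json ] && grep -q '"jest"' package.json; then
        echo "npx jest --coverage"
    elif [ -f pyproject.toml ] || [ -f pytest.ini ]; then
        echo "pytest --cov=src --cov-report=html"
    else
        # Error-handling branch: tell the agent what to check next.
        echo "No coverage tool detected: check package.json for nyc or jest, or add pytest-cov" >&2
        return 1
    fi
}
```

A skill body written this way gives the agent copy-paste commands plus a defined fallback, rather than an abstract promise to "execute the project's test suite".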
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is verbose and explains concepts Claude already knows (what coverage tools are, what line/branch/function coverage means). Phrases like 'This skill enables Claude to...' and 'It helps you identify gaps' are unnecessary padding. | 1 / 3 |
| Actionability | No executable code or concrete commands are provided. The skill describes what it will do abstractly ('execute the project's test suite') but never shows actual commands like `nyc npm test` or `pytest --cov`. Examples describe outcomes rather than providing copy-paste-ready instructions. | 1 / 3 |
| Workflow Clarity | Steps are listed in a sequence (1. Collection, 2. Report Generation, 3. Identification), but there are no validation checkpoints, no error handling guidance, and no feedback loops for when coverage tools fail or produce unexpected results. | 2 / 3 |
| Progressive Disclosure | Content is reasonably organized with clear sections, but everything is inline in one file. The 'Integration' section hints at connections to other tools but provides no actual references. For a skill of this length, the structure is acceptable but could benefit from linking to tool-specific guides. | 2 / 3 |
| Total | | 6 / 12 Passed |
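The kind of validation checkpoint the Workflow Clarity row finds missing can be sketched as a small gate between report generation and gap identification. The `check_coverage` function name, the plain-integer inputs, and the message wording are assumptions for illustration:

```shell
# check_coverage: a validation checkpoint between "generate report" and
# "identify gaps". Fails fast when total coverage is below a threshold,
# so the workflow can branch instead of silently continuing.
check_coverage() {
    total=$1      # total line coverage as an integer percentage, e.g. 73
    threshold=$2  # minimum acceptable percentage, e.g. 80
    if [ "$total" -lt "$threshold" ]; then
        echo "FAIL: coverage ${total}% is below the ${threshold}% threshold" >&2
        return 1
    fi
    echo "OK: coverage ${total}% meets the ${threshold}% threshold"
}
```

With a checkpoint like this, a failed run produces an explicit signal the agent can act on (re-run, report the gap, or stop) rather than an unchecked result.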
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.