# test-coverage

> Analyzes test coverage and generates missing tests to achieve 80%+ coverage
No eval scenarios have been run
No known issues
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./.claude/skills/test-coverage/SKILL.md
```

## Quality
### Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description communicates its core purpose—analyzing test coverage and generating missing tests—but lacks a 'Use when...' clause, which is a significant gap for skill selection. It would benefit from more specific action verbs, natural trigger terms users might say, and explicit guidance on when Claude should select this skill.
#### Suggestions

- Add a 'Use when...' clause with trigger terms like 'test coverage', 'missing tests', 'increase coverage', 'uncovered code', 'coverage report', or 'coverage threshold'.
- Include more specific concrete actions, such as 'identifies uncovered branches and functions, generates unit tests for untested code paths, and produces coverage reports'.
- Add natural keyword variations users might say, such as 'unit tests', 'code coverage', 'pytest', 'jest', '.coverage', or 'coverage percentage'.
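As an illustration of the suggestions above, the description could be reworded along these lines. This is a hypothetical sketch of SKILL.md frontmatter, not the canonical schema or the skill's actual wording:

```yaml
# Illustrative SKILL.md frontmatter (field names and phrasing are assumptions)
name: test-coverage
description: >
  Analyzes test coverage reports, identifies uncovered branches and
  functions, and generates unit tests for untested code paths to reach
  an 80%+ threshold. Use when the user mentions test coverage, missing
  tests, uncovered code, coverage reports, or raising a coverage threshold.
```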
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (test coverage) and two actions (analyzes test coverage, generates missing tests), but doesn't list more specific concrete actions like identifying uncovered branches, creating unit/integration tests, or specifying frameworks. | 2 / 3 |
| Completeness | Describes what it does (analyzes coverage, generates tests) but has no explicit 'Use when...' clause or equivalent trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'when' is entirely absent, this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'test coverage' and 'missing tests', but misses common user variations such as 'unit tests', 'code coverage', 'uncovered code', 'coverage report', or specific tool names like 'jest', 'pytest', 'coverage threshold'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on test coverage and the 80%+ target provides some specificity, but it could overlap with general test-writing skills or code quality skills. The coverage percentage adds some distinction, but the scope is still somewhat broad. | 2 / 3 |
| **Total** | | **7 / 12 — Passed** |
### Implementation — 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a reasonable high-level workflow for test coverage analysis but lacks the concrete, actionable guidance needed to be truly useful. There are no code examples showing what generated tests should look like, no sample coverage report parsing, and no specific patterns for different test types. The workflow steps are logical but too abstract to differentiate this from what Claude would do by default.
#### Suggestions

- Add concrete code examples showing how to parse coverage-summary.json and identify under-covered files programmatically.
- Include a template or example of a generated test for each type (unit, integration, E2E) so Claude knows the expected output format.
- Add an explicit feedback loop: if new tests fail, diagnose the failure, fix the test, re-run, and only proceed when all tests pass.
- Specify the exact coverage tool output format expected and how to calculate before/after metrics, rather than just saying 'show before/after coverage metrics'.
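To make the first suggestion concrete, here is a minimal sketch of parsing Istanbul's `coverage-summary.json` (the format jest emits with `--coverageReporters=json-summary`) and listing files below the 80% threshold. The sample data is invented for illustration, not real project output:

```javascript
// Sketch: find under-covered files in an Istanbul json-summary report.
// In practice this object would be read from coverage/coverage-summary.json;
// the file paths and numbers below are made up for the example.
const summary = {
  total: { lines: { total: 130, covered: 65, pct: 50 } },
  "src/utils.js": { lines: { total: 50, covered: 45, pct: 90 } },
  "src/parser.js": { lines: { total: 80, covered: 20, pct: 25 } },
};

const THRESHOLD = 80;

// Skip the aggregate "total" entry, keep files under the threshold.
const underCovered = Object.entries(summary)
  .filter(([file]) => file !== "total")
  .filter(([, metrics]) => metrics.lines.pct < THRESHOLD)
  .map(([file, metrics]) => `${file}: ${metrics.lines.pct}% lines`);

console.log(underCovered.join("\n")); // → src/parser.js: 25% lines
```

The same per-file `lines.pct` values, captured before and after generating tests, would also give the before/after metrics the last suggestion asks for.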
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Reasonably concise and doesn't over-explain concepts Claude knows, but the 'Focus on' section is somewhat generic advice Claude already understands. Could be tighter overall. | 2 / 3 |
| Actionability | No executable code examples, no concrete test templates, no specific commands beyond the initial npm/pnpm invocation. Steps like 'Generate unit tests for functions' are vague directions rather than actionable guidance with examples of what generated tests should look like. | 1 / 3 |
| Workflow Clarity | Steps are listed in a logical sequence and include a verification step (step 5), but there's no explicit validation/feedback loop for when tests fail, no guidance on how to handle coverage regressions, and the 'verify new tests pass' step lacks detail on what to do if they don't. | 2 / 3 |
| Progressive Disclosure | The content is short enough that it doesn't need external references, but it's a flat list without clear section headers or structural organization. For a skill that covers multiple test types (unit, integration, E2E), it could benefit from better structure or references to examples. | 2 / 3 |
| **Total** | | **7 / 12 — Passed** |
### Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed. Validation for skill structure: no warnings or errors.
Revision: `7aff694`
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.