# Coverage Report Analyzer: Skill Review

Description under review: "Coverage Report Analyzer - Auto-activating skill for Test Automation. Triggers on: coverage report analyzer, coverage report analyzer. Part of the Test Automation skill category."
- Impact: 92%
- Eval performance: 0.98x average score across 3 eval scenarios
- Status: Passed
- Known issues: none

To optimize this skill with Tessl:

```
npx tessl skill review --optimize ./planned-skills/generated/09-test-automation/coverage-report-analyzer/SKILL.md
```

## Quality
### Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a placeholder that repeats the skill name without providing any substantive information about what the skill does or when it should be used. It lacks concrete actions, meaningful trigger terms, and explicit usage guidance, making it nearly useless for skill selection among multiple options.
**Suggestions**

- Add specific concrete actions the skill performs, e.g., 'Parses coverage reports (lcov, cobertura, Istanbul), identifies uncovered code paths, summarizes coverage percentages by file/module, and highlights coverage regressions.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about code coverage, test coverage reports, uncovered lines, coverage percentages, .lcov files, or coverage gaps.'
- Remove the duplicate trigger term and replace it with diverse natural-language variations users might actually say, such as 'coverage report', 'test coverage', 'code coverage', 'coverage summary', 'uncovered code'.
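Combining the suggestions above, a rewritten frontmatter description might read as follows (hypothetical wording, assembled from the examples given in this review):

```yaml
description: >
  Parses coverage reports (lcov, Cobertura XML, Istanbul/coverage.py JSON),
  identifies uncovered code paths, summarizes coverage percentages by file
  and module, and highlights coverage regressions. Use when the user asks
  about code coverage, test coverage reports, uncovered lines, coverage
  percentages, .lcov files, or coverage gaps.
```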
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names a domain ('Coverage Report Analyzer') but describes no concrete actions. There are no specific capabilities listed, such as parsing coverage files, identifying uncovered lines, or generating summaries. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name, and there is no explicit 'when should Claude use it' clause. The 'Triggers on' line just repeats the skill name rather than providing meaningful trigger guidance. | 1 / 3 |
| Trigger Term Quality | The trigger terms are just 'coverage report analyzer' repeated twice. It misses natural user phrases like 'code coverage', 'test coverage', 'uncovered lines', 'coverage percentage', '.lcov', 'coverage gaps', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'Coverage Report Analyzer' is somewhat specific to a niche (test coverage analysis), which provides some distinctiveness. However, the lack of concrete actions or file types means it could overlap with general test automation or reporting skills. | 2 / 3 |
| **Total** | | **5 / 12 (Passed)** |
### Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an empty shell with no actual instructional content. It consists entirely of meta-descriptions and placeholder text that describe what the skill would do without providing any concrete guidance, code, commands, or workflows for analyzing coverage reports. It fails on every dimension of the rubric.
**Suggestions**

- Add concrete, executable examples showing how to generate and parse coverage reports (e.g., `pytest --cov=mypackage --cov-report=json`, then parsing the JSON output with Python code).
- Define a clear multi-step workflow: generate report → parse results → identify uncovered lines → suggest improvements, with validation at each step.
- Remove all meta-description sections ('Purpose', 'When to Use', 'Capabilities', 'Example Triggers') and replace them with actionable content like code snippets for common coverage tools (Jest, pytest-cov, Istanbul).
- Add specific examples of coverage report formats (lcov, Cobertura XML, JSON) with sample parsing code and thresholds for pass/fail decisions.
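As a sketch of the kind of actionable content these suggestions call for, here is a minimal parser for the JSON report produced by `pytest --cov --cov-report=json`. The sample data is a hand-written stand-in for a real `coverage.json` (shaped like coverage.py's JSON schema), and the 80% threshold is an arbitrary cutoff for illustration:

```python
import json

# Hand-written stand-in for the output of:
#   pytest --cov=mypackage --cov-report=json
# Real reports follow coverage.py's JSON schema: each file entry carries
# a summary block plus the list of missing (uncovered) line numbers.
SAMPLE_REPORT = json.dumps({
    "files": {
        "mypackage/core.py": {
            "summary": {"covered_lines": 45, "num_statements": 50},
            "missing_lines": [12, 13, 40, 41, 42],
        },
        "mypackage/utils.py": {
            "summary": {"covered_lines": 10, "num_statements": 40},
            "missing_lines": list(range(20, 50)),
        },
    },
    "totals": {"covered_lines": 55, "num_statements": 90},
})

THRESHOLD = 80.0  # arbitrary pass/fail cutoff for this sketch


def analyze(report_json: str, threshold: float = THRESHOLD):
    """Summarize per-file coverage and flag files below the threshold."""
    report = json.loads(report_json)
    results = []
    for path, data in report["files"].items():
        summary = data["summary"]
        pct = 100.0 * summary["covered_lines"] / summary["num_statements"]
        results.append({
            "file": path,
            "percent": round(pct, 1),
            "uncovered": data["missing_lines"],
            "passed": pct >= threshold,
        })
    # Worst-covered files first, so the biggest gaps surface at the top.
    results.sort(key=lambda r: r["percent"])
    return results


for r in analyze(SAMPLE_REPORT):
    status = "PASS" if r["passed"] else "FAIL"
    print(f"{status} {r['file']}: {r['percent']}% "
          f"({len(r['uncovered'])} uncovered lines)")
```

This follows the suggested workflow (generate → parse → identify uncovered lines → decide pass/fail) with a single decision point; a fuller skill would also diff against a previous report to catch regressions.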
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is entirely filler and meta-description. It explains what the skill does in abstract terms without providing any actual technical content. Every section restates the same vague idea ('coverage report analyzer') without adding substance. | 1 / 3 |
| Actionability | There is zero concrete guidance: no code, no commands, no specific steps, no examples of how to actually analyze a coverage report. The 'Example Triggers' section just lists ways to invoke the skill, not actionable instructions. | 1 / 3 |
| Workflow Clarity | No workflow is defined at all. There are no steps, no sequence, no validation checkpoints. The skill claims to provide 'step-by-step guidance' but contains none. | 1 / 3 |
| Progressive Disclosure | The content is a flat, monolithic block of vague descriptions with no references to detailed files, no structured navigation, and no meaningful organization beyond boilerplate headings. | 1 / 3 |
| **Total** | | **4 / 12 (Passed)** |
### Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

**Validation for skill structure: 9 / 11 checks passed**
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | `allowed-tools` contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11 Passed** |
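A frontmatter sketch that would address both warnings, assuming the skill spec's standard keys (`name`, `description`, `allowed-tools`) and a `metadata` block for nonstandard keys; the tool list and the moved key shown here are illustrative, not taken from the actual skill:

```yaml
---
name: coverage-report-analyzer
description: >
  Parses coverage reports (lcov, Cobertura XML, coverage.py JSON),
  summarizes coverage by file, and highlights uncovered lines. Use when
  the user asks about code coverage, coverage reports, or coverage gaps.
allowed-tools: Read, Grep, Bash   # well-known tool names only (assumed set)
metadata:
  category: test-automation       # formerly an unknown top-level key (assumed)
---
```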