generating-test-reports

This skill generates comprehensive test reports with coverage metrics, trends, and stakeholder-friendly formats (HTML, PDF, JSON). It aggregates test results from various frameworks, calculates key metrics (coverage, pass rate, duration), and performs trend analysis. Use this skill when the user requests a test report, coverage analysis, failure analysis, or historical comparisons of test runs. Trigger terms include "test report", "coverage report", "testing trends", "failure analysis", and "historical test data".

Score: 93

Quality: 53% (1.00x)
Does it follow best practices?

Impact: 97% (1.00x)
Average score across 15 eval scenarios

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./backups/skills-migration-20251108-070147/plugins/testing/test-report-generator/skills/test-report-generator/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly articulates specific capabilities (report generation, metric calculation, trend analysis), provides explicit trigger guidance with natural user terms, and occupies a distinct niche. It uses proper third-person voice throughout and balances detail with conciseness effectively.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: generates test reports, aggregates results from frameworks, calculates key metrics (coverage, pass rate, duration), performs trend analysis, and supports multiple output formats (HTML, PDF, JSON). | 3 / 3 |
| Completeness | Clearly answers both 'what' (generates test reports with coverage metrics, aggregates results, calculates metrics, performs trend analysis) and 'when' (explicit 'Use this skill when...' clause with specific trigger scenarios and terms). | 3 / 3 |
| Trigger Term Quality | Explicitly lists natural trigger terms users would say: 'test report', 'coverage report', 'testing trends', 'failure analysis', 'historical test data'. Also includes related terms like 'coverage analysis' and 'historical comparisons' in the use-when clause. | 3 / 3 |
| Distinctiveness / Conflict Risk | Occupies a clear niche around test reporting, coverage metrics, and trend analysis. The specific focus on report generation with output formats and historical comparisons makes it unlikely to conflict with general testing or code quality skills. | 3 / 3 |

Total: 12 / 12. Passed.

Implementation: 7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill content is essentially a high-level description of what a test reporting skill would do, rather than actionable instructions for how to do it. It lacks any concrete code, commands, templates, or specific implementation details. The content reads more like a product description or README than a skill that Claude could execute.

Suggestions

Add executable code examples showing how to actually parse test results from specific frameworks (e.g., pytest JSON output, JUnit XML) and compute metrics like coverage percentage and pass rate.
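As a sketch of what this suggestion means in practice, the following parses a JUnit-style XML file and computes pass rate and total duration. The file name and attribute names are assumptions based on the common JUnit XML layout; adapt them to the framework actually in use.

```python
# Hypothetical sketch: summarize a JUnit-style results file.
import xml.etree.ElementTree as ET

def summarize_junit(path: str) -> dict:
    root = ET.parse(path).getroot()
    # JUnit XML may have a <testsuites> wrapper or a single <testsuite> root.
    suites = root.iter("testsuite") if root.tag == "testsuites" else [root]
    total = failures = errors = skipped = 0
    duration = 0.0
    for suite in suites:
        total += int(suite.get("tests", 0))
        failures += int(suite.get("failures", 0))
        errors += int(suite.get("errors", 0))
        skipped += int(suite.get("skipped", 0))
        duration += float(suite.get("time", 0.0))
    executed = total - skipped
    # Pass rate is computed over executed tests, not skipped ones.
    pass_rate = (executed - failures - errors) / executed if executed else 0.0
    return {
        "total": total,
        "passed": executed - failures - errors,
        "failed": failures + errors,
        "skipped": skipped,
        "pass_rate": round(pass_rate, 4),
        "duration_seconds": round(duration, 2),
    }
```

A pytest JSON report would need a parallel parser reading the `summary` block of its JSON output rather than XML attributes.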

Include a concrete report template (HTML snippet, JSON schema) that Claude can populate with actual test data, rather than abstractly describing what a report contains.
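A concrete template along those lines might look like the sketch below; the field names are illustrative assumptions, not a fixed schema, and the skill could populate them from whatever parser it uses.

```python
# Hypothetical JSON report template, populated from computed metrics.
import json
from datetime import datetime, timezone

def build_report(tests: int, passed: int, failed: int, skipped: int,
                 coverage_percent: float, duration_seconds: float) -> str:
    report = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "totals": {
            "tests": tests,
            "passed": passed,
            "failed": failed,
            "skipped": skipped,
        },
        "metrics": {
            # Pass rate over executed (non-skipped) tests; guard against
            # division by zero when every test was skipped.
            "pass_rate": round(passed / max(tests - skipped, 1), 4),
            "coverage_percent": coverage_percent,
            "duration_seconds": duration_seconds,
        },
    }
    return json.dumps(report, indent=2)
```

An HTML variant would follow the same shape, substituting a string template for `json.dumps`.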

Replace the abstract 'How It Works' steps with a concrete workflow including specific commands (e.g., 'Run `pytest --json-report` to generate input data') and validation checkpoints.
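A workflow step with a validation checkpoint could be sketched as below. The pytest invocation in the usage comment assumes the third-party pytest-json-report plugin is installed; the function itself is command-agnostic.

```python
# Hypothetical workflow step: run a test command, then verify its JSON
# report exists before handing the summary to the report generator.
import json
import pathlib
import subprocess

def run_and_collect(cmd: list, report_path: str) -> dict:
    proc = subprocess.run(cmd, capture_output=True, text=True)
    path = pathlib.Path(report_path)
    # Validation checkpoint: fail loudly if no report was produced,
    # rather than silently generating an empty report downstream.
    if not path.exists():
        raise RuntimeError(f"no report at {report_path}; stderr: {proc.stderr}")
    data = json.loads(path.read_text())
    return data.get("summary", {})

# Example invocation (assumes the pytest-json-report plugin is installed):
# run_and_collect(
#     ["pytest", "--json-report", "--json-report-file=.report.json"],
#     ".report.json",
# )
```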

Remove the 'Overview', 'When to Use', and 'Best Practices' sections which explain obvious concepts, and replace them with actionable content like error handling patterns or format-specific generation logic.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is verbose and explains concepts Claude already knows (what aggregating results means, what code coverage is, when to use the skill). The 'Overview', 'How It Works', 'When to Use', and 'Best Practices' sections are largely redundant filler that add no actionable information Claude couldn't infer. | 1 / 3 |
| Actionability | There is no concrete code, no executable commands, no specific tool invocations, no actual report templates, and no real implementation guidance. The examples describe what the skill 'will do' in abstract terms rather than showing how to actually do it. There's nothing copy-paste ready or executable. | 1 / 3 |
| Workflow Clarity | The numbered steps in 'How It Works' and the examples are vague descriptions ('aggregate test results', 'calculate code coverage') with no specific commands, validation checkpoints, or error handling. There is no concrete workflow Claude could follow to actually produce a report. | 1 / 3 |
| Progressive Disclosure | The content is organized into logical sections with headers, which provides some structure. However, there are no references to supporting files, no bundle files exist, and the content that is present is shallow rather than appropriately split between overview and detail. | 2 / 3 |

Total: 5 / 12. Passed.

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: jeremylongshore/claude-code-plugins-plus-skills
Reviewed

