
generating-test-reports

This skill generates comprehensive test reports with coverage metrics, trends, and stakeholder-friendly formats (HTML, PDF, JSON). It aggregates test results from various frameworks, calculates key metrics (coverage, pass rate, duration), and performs trend analysis. Use this skill when the user requests a test report, coverage analysis, failure analysis, or historical comparisons of test runs. Trigger terms include "test report", "coverage report", "testing trends", "failure analysis", and "historical test data".
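The aggregation step described above can be sketched roughly as follows. This is an illustrative sketch assuming JUnit-style XML input, not the skill's actual implementation:

```python
import xml.etree.ElementTree as ET

def aggregate_junit(xml_reports):
    """Aggregate JUnit-style XML suites into summary metrics.

    Accepts a list of XML report strings; each may be a single
    <testsuite> or a <testsuites> wrapper.
    """
    totals = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0, "time": 0.0}
    for report in xml_reports:
        root = ET.fromstring(report)
        suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
        for suite in suites:
            totals["tests"] += int(suite.get("tests", 0))
            totals["failures"] += int(suite.get("failures", 0))
            totals["errors"] += int(suite.get("errors", 0))
            totals["skipped"] += int(suite.get("skipped", 0))
            totals["time"] += float(suite.get("time", 0.0))
    # Pass rate over executed (non-skipped) tests; failures and errors
    # are counted separately, matching the eval criteria below.
    executed = totals["tests"] - totals["skipped"]
    passed = executed - totals["failures"] - totals["errors"]
    totals["pass_rate"] = round(100 * passed / executed, 1) if executed else 0.0
    return totals
```

Counting pass rate over executed tests (rather than all tests) keeps skipped tests from silently inflating or deflating the metric.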

Install with Tessl CLI

npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill generating-test-reports

Score: 93

Quality: 53% (1.00x; does it follow best practices?)

Impact: 97% (1.00x; average score across 15 eval scenarios)

Optimize this skill with Tessl

npx tessl skill review --optimize ./backups/skills-migration-20251108-070147/plugins/testing/test-report-generator/skills/test-report-generator/SKILL.md

Evaluation results

Shareable Test Report for Engineering Leadership (100%)

HTML report with aggregated metrics and failure analysis

| Criteria | Without context | With context |
|---|---|---|
| HTML output format | 100% | 100% |
| Coverage percentage | 100% | 100% |
| Pass rate | 100% | 100% |
| Test duration | 100% | 100% |
| Failing tests listed | 100% | 100% |
| Failure messages included | 100% | 100% |
| Multi-suite aggregation | 100% | 100% |
| Skipped tests reported | 100% | 100% |
| Errors vs failures distinction | 100% | 100% |
| Summary section | 100% | 100% |

Without context: $0.3877 · 1m 52s · 12 turns · 19 in / 8,099 out tokens

With context: $0.7767 · 3m 26s · 28 turns · 229 in / 14,109 out tokens

CI/CD Build Quality Comparison (98%, +2%)

Cross-build trend analysis and regression detection

| Criteria | Without context | With context |
|---|---|---|
| Both builds compared | 100% | 100% |
| Pass rate trend | 60% | 80% |
| Coverage trend | 100% | 100% |
| Duration trend | 100% | 100% |
| Regressions highlighted | 100% | 100% |
| Improvements highlighted | 100% | 100% |
| New failures identified | 100% | 100% |
| Fixed tests identified | 100% | 100% |
| Side-by-side metric comparison | 100% | 100% |

Without context: $0.3264 · 1m 43s · 12 turns · 17 in / 5,259 out tokens

With context: $0.6224 · 3m 6s · 26 turns · 521 in / 9,982 out tokens
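The build-over-build comparison this scenario exercises can be sketched as below. Field names such as `failed_tests` are assumptions for illustration, not the skill's actual data model:

```python
def compare_builds(prev, curr):
    """Compare two build summaries: flag metric regressions and
    improvements, plus test-level status changes.

    Each summary is a dict with numeric "pass_rate" and "coverage"
    keys and a "failed_tests" set of test names (illustrative shape).
    """
    deltas = {k: round(curr[k] - prev[k], 2) for k in ("pass_rate", "coverage")}
    return {
        "deltas": deltas,
        # A metric that moved down is a regression; up is an improvement.
        "regressions": [k for k, d in deltas.items() if d < 0],
        "improvements": [k for k, d in deltas.items() if d > 0],
        # Set difference gives newly failing and newly fixed tests.
        "new_failures": sorted(curr["failed_tests"] - prev["failed_tests"]),
        "fixed_tests": sorted(prev["failed_tests"] - curr["failed_tests"]),
    }
```

Tracking failing-test names as sets makes the new-failure and fixed-test criteria a pair of set differences rather than a per-test scan.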

Automated Test Monitoring for API Health Dashboard (100%)

Scoped JSON report with failure details

| Criteria | Without context | With context |
|---|---|---|
| JSON output format | 100% | 100% |
| Scoped to API tests | 100% | 100% |
| Coverage metric included | 100% | 100% |
| Pass rate included | 100% | 100% |
| Duration included | 100% | 100% |
| Failing test names | 100% | 100% |
| Failure messages included | 100% | 100% |
| Structured JSON hierarchy | 100% | 100% |
| Skipped test count | 100% | 100% |

Without context: $0.1868 · 45s · 8 turns · 11 in / 2,587 out tokens

With context: $0.4114 · 1m 57s · 22 turns · 62 in / 4,950 out tokens
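A JSON report satisfying criteria like these might be shaped as follows. The field names are illustrative assumptions, not the skill's actual schema:

```python
import json

# Illustrative report structure: a summary block for dashboard
# metrics plus a failures list with names and messages.
report = {
    "project": "api-service",          # hypothetical project name
    "scope": "api",
    "summary": {
        "pass_rate": 95.0,             # percent of executed tests passing
        "coverage": 87.2,              # line coverage percent
        "duration_seconds": 42.7,
        "skipped": 3,
    },
    "failures": [
        {
            "test": "test_auth_token_refresh",
            "message": "AssertionError: expected 200, got 401",
        },
    ],
}
print(json.dumps(report, indent=2))
```

Nesting metrics under a `summary` key keeps dashboard consumers from having to walk the failure list just to read aggregate numbers.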

Quarterly Engineering Review: Test Health Report (78%)

PDF format report generation

| Criteria | Without context | With context |
|---|---|---|
| PDF output file | 0% | 0% |
| Coverage percentage included | 100% | 100% |
| Pass rate included | 80% | 80% |
| Test duration included | 100% | 100% |
| Failing tests named | 100% | 100% |
| Failure messages included | 100% | 100% |
| Error distinguished from failure | 100% | 100% |
| Multi-suite aggregation | 100% | 100% |
| Project context included | 100% | 100% |
| Skipped test count | 100% | 100% |

Without context: $0.3732 · 2m 2s · 9 turns · 14 in / 8,560 out tokens

With context: $0.5738 · 2m 48s · 16 turns · 307 in / 12,096 out tokens

Sprint Release Quality Gate: Test Health Trend Report (100%)

Multi-build historical trend analysis

| Criteria | Without context | With context |
|---|---|---|
| All 5 builds included | 100% | 100% |
| Pass rate trend shown | 100% | 100% |
| Coverage trend shown | 100% | 100% |
| Duration trend shown | 100% | 100% |
| Regressions identified | 100% | 100% |
| Improvements identified | 100% | 100% |
| Persistent failures highlighted | 100% | 100% |
| Comparative presentation | 100% | 100% |
| Time period scoped | 100% | 100% |
| Summary conclusion | 100% | 100% |

Without context: $0.3150 · 1m 47s · 14 turns · 18 in / 5,157 out tokens

With context: $0.4090 · 2m 20s · 20 turns · 515 in / 6,582 out tokens

Automated Test Reporting for CI/CD Pipeline (100%)

CI/CD pipeline integration for automated test reporting

| Criteria | Without context | With context |
|---|---|---|
| GitHub Actions workflow file | 100% | 100% |
| Triggered on build events | 100% | 100% |
| Test execution step included | 100% | 100% |
| Report generation step included | 100% | 100% |
| Report output format specified | 100% | 100% |
| Coverage metrics collected | 100% | 100% |
| Report artifact uploaded or shared | 100% | 100% |
| JUnit XML output enabled | 100% | 100% |
| Report generation script produced | 100% | 100% |
| Project context in report | 100% | 100% |

Without context: $0.4012 · 1m 50s · 19 turns · 25 in / 6,576 out tokens

With context: $0.5135 · 2m 17s · 28 turns · 63 in / 7,876 out tokens
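A workflow meeting these criteria might look roughly like the sketch below. The `pytest`/`pytest-cov` flags are real options, but `generate_report.py` is a hypothetical script name standing in for whatever report generator the skill produces:

```yaml
# Illustrative GitHub Actions workflow; not the skill's actual output.
name: test-report
on: [push, pull_request]

jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests with JUnit XML and coverage output
        run: pytest --junitxml=results.xml --cov --cov-report=xml
      - name: Generate HTML report  # generate_report.py is hypothetical
        run: python generate_report.py results.xml coverage.xml -o report.html
      - name: Upload report artifact
        uses: actions/upload-artifact@v4
        with:
          name: test-report
          path: report.html
```

Emitting JUnit XML from the test step is what lets a separate report-generation step stay framework-agnostic.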

Test Coverage Analysis: Prioritizing Testing Efforts (100%)

Coverage gap analysis: identifying areas needing more testing

| Criteria | Without context | With context |
|---|---|---|
| Low-coverage files named | 100% | 100% |
| Per-file line coverage % | 100% | 100% |
| Branch coverage metric | 100% | 100% |
| Files ranked or prioritized | 100% | 100% |
| Overall aggregate coverage | 100% | 100% |
| Missing lines or uncovered area noted | 100% | 100% |
| High-coverage files identified | 100% | 100% |

Without context: $0.2071 · 1m 5s · 10 turns · 17 in / 3,238 out tokens

With context: $0.3349 · 1m 38s · 17 turns · 52 in / 4,601 out tokens
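The ranking step in a coverage-gap analysis can be sketched as below; the per-file coverage dict is an assumed input shape, not the skill's actual interface:

```python
def rank_coverage_gaps(file_coverage, threshold=80.0):
    """Return files below a line-coverage threshold, lowest first.

    file_coverage maps file path -> line coverage percent
    (an assumed input shape for illustration).
    """
    gaps = [(path, pct) for path, pct in file_coverage.items() if pct < threshold]
    # Sorting ascending puts the least-covered files at the top,
    # which is the natural order for prioritizing testing effort.
    return sorted(gaps, key=lambda item: item[1])
```

A usage example: with `{"a.py": 45.0, "b.py": 92.0, "c.py": 70.0}` and the default 80% threshold, `a.py` ranks first and `b.py` is excluded as high-coverage.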

Test Report for Distributed Team Review (100%)

Project context and environment metadata in test reports

| Criteria | Without context | With context |
|---|---|---|
| Project name included | 100% | 100% |
| Version or environment included | 100% | 100% |
| Build or CI reference included | 100% | 100% |
| Test framework info included | 100% | 100% |
| Pass rate computed correctly | 100% | 100% |
| Test duration included | 100% | 100% |
| Failing test names listed | 100% | 100% |
| Error vs failure distinguished | 100% | 100% |
| Failure messages included | 100% | 100% |

Without context: $0.4076 · 2m 8s · 15 turns · 21 in / 8,199 out tokens

With context: $0.6978 · 3m 8s · 22 turns · 26 in / 13,793 out tokens

Search Service Test Report for Dashboard and Team Review (100%)

Dual-format report generation (HTML and JSON outputs)

| Criteria | Without context | With context |
|---|---|---|
| HTML file produced | 100% | 100% |
| JSON file produced | 100% | 100% |
| Consistent pass rate across formats | 100% | 100% |
| Coverage in both formats | 100% | 100% |
| Failing tests in both formats | 100% | 100% |
| HTML structured presentation | 100% | 100% |
| JSON nested structure | 100% | 100% |
| Duration in both formats | 100% | 100% |

Without context: $0.4330 · 2m · 14 turns · 19 in / 9,197 out tokens

With context: $0.6576 · 2m 42s · 26 turns · 34 in / 11,003 out tokens

Evaluated with agent Claude Code, using model Claude Sonnet 4.6.

