Parses and analyzes test output to identify failing tests, flaky tests, coverage gaps, defect patterns, and release readiness. Use when the user shares test results, CI/CD pipeline output, pytest/JUnit/Mocha logs, test coverage reports, or asks about test failures, pass rates, or quality metrics. Produces structured reports with root cause analysis, risk assessment, and prioritized recommendations for development teams and stakeholders.
Parses raw test data into actionable insights: failure root causes, coverage gaps, defect trends, and release readiness assessments. Applies statistical methods to validate findings and produces stakeholder-specific reports.
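The "statistical methods" step can be illustrated with a Wilson score interval on the observed pass rate, which guards against over-reading small test runs. This is a hedged sketch; the function name and the choice of interval are illustrative assumptions, not part of the skill's documented API:

```python
import math

def pass_rate_confidence(passed: int, total: int, z: float = 1.96) -> tuple:
    """Wilson score interval for the true pass rate (illustrative helper).

    z=1.96 gives a 95% interval; with few test runs the interval is wide,
    signaling that the observed pass rate may not be trustworthy yet.
    """
    if total == 0:
        return (0.0, 0.0)
    p = passed / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    margin = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return (center - margin, center + margin)

low, high = pass_rate_confidence(92, 100)
print(f"Observed 92% pass rate; 95% CI: {low:.1%} - {high:.1%}")
```

With 92/100 passing, the interval spans roughly 85% to 96%, so a single run near a 90% threshold should not by itself decide release readiness.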
Stakeholder report templates are in TEMPLATES.md.

Release readiness is assessed against these criteria:

| Criterion | GO Threshold | CONDITIONAL | NO-GO |
|---|---|---|---|
| Test pass rate | ≥ 95% | 90–94% | < 90% |
| Line coverage | ≥ 80% | 70–79% | < 70% |
| Critical defects open | 0 | — | ≥ 1 |
| Performance SLA met | Yes | Degraded < 10% | Degraded ≥ 10% |
| Security tests passing | 100% | — | < 100% |
Adjust thresholds when the user specifies project-specific standards.
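The thresholds above can be encoded directly as a decision function. A minimal sketch, assuming a flat metrics dict whose keys are illustrative rather than a fixed schema:

```python
def release_decision(metrics: dict) -> str:
    """Apply the GO/CONDITIONAL/NO-GO thresholds from the table above.

    Assumed keys: pass_rate, line_coverage (percent), critical_defects,
    perf_degradation_pct, security_pass_rate (percent).
    """
    # NO-GO: any hard gate failed
    if (metrics["pass_rate"] < 90
            or metrics["line_coverage"] < 70
            or metrics["critical_defects"] >= 1
            or metrics["perf_degradation_pct"] >= 10
            or metrics["security_pass_rate"] < 100):
        return "NO-GO"
    # CONDITIONAL: below a GO threshold but above every NO-GO floor
    if (metrics["pass_rate"] < 95
            or metrics["line_coverage"] < 80
            or metrics["perf_degradation_pct"] > 0):
        return "CONDITIONAL"
    return "GO"

print(release_decision({
    "pass_rate": 93, "line_coverage": 82, "critical_defects": 0,
    "perf_degradation_pct": 0, "security_pass_rate": 100,
}))  # CONDITIONAL: pass rate falls in the 90-94% band
```

Checking the hard NO-GO gates first ensures a single critical defect or failing security test overrides otherwise healthy metrics.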
Full implementations of `parse_junit_xml` and `detect_flaky_tests` are in EXAMPLES.md. Invoke them as follows:

```python
# Parse a JUnit XML report
report = parse_junit_xml("test-results.xml")
print(f"Pass rate: {report['summary']['pass_rate']}%")
print(f"Failures by class: {report['failures_by_class']}")

# Detect flaky tests across multiple run result dicts {test_name: "pass"|"fail"}
flaky = detect_flaky_tests(run_results)
print(f"Flaky tests: {flaky}")
```

Coverage reports from common tools (pytest --cov, nyc, jacoco) are also accepted as input.
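For readers without EXAMPLES.md at hand, one plausible shape for `detect_flaky_tests` is sketched below. This is an assumption about the interface described above (a list of per-run result dicts), not the actual implementation:

```python
def detect_flaky_tests(run_results: list) -> list:
    """Flag tests whose outcome differs across runs (illustrative sketch).

    run_results: list of dicts mapping test_name -> "pass" | "fail",
    one dict per CI run. A test is flaky if it both passed and failed.
    """
    outcomes = {}
    for run in run_results:
        for name, status in run.items():
            outcomes.setdefault(name, set()).add(status)
    return sorted(name for name, seen in outcomes.items() if len(seen) > 1)

runs = [
    {"test_login": "pass", "test_search": "pass"},
    {"test_login": "fail", "test_search": "pass"},
    {"test_login": "pass", "test_search": "pass"},
]
print(detect_flaky_tests(runs))  # ['test_login']
```

Collecting the set of observed outcomes per test makes the check order-independent: any test with more than one distinct outcome is reported.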