
senior-qa

Generates unit tests, integration tests, and E2E tests for React/Next.js applications. Scans components to create Jest + React Testing Library test stubs, analyzes Istanbul/LCOV coverage reports to surface gaps, scaffolds Playwright test files from Next.js routes, mocks API calls with MSW, creates test fixtures, and configures test runners. Use when the user asks to "generate tests", "write unit tests", "analyze test coverage", "scaffold E2E tests", "set up Playwright", "configure Jest", "implement testing patterns", or "improve test quality".
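The "scaffolds Playwright test files from Next.js routes" capability could be sketched roughly as follows. This is an illustrative sketch under assumed conventions (App Router `app/**/page.tsx` layout); the function names and the generated stub are hypothetical, not the skill's actual implementation:

```typescript
// Illustrative sketch: map a Next.js App Router file path to a URL path
// and emit a minimal Playwright smoke-test stub as a string.
// Names here are hypothetical, not the skill's actual implementation.

/** Convert e.g. "app/blog/[slug]/page.tsx" to "/blog/[slug]". */
function routeFromAppPath(filePath: string): string {
  const trimmed = "/" + filePath
    .replace(/^app\//, "")
    .replace(/\/?page\.tsx?$/, "");
  const route = trimmed.replace(/\/$/, "");
  return route === "" ? "/" : route;
}

/** Emit a Playwright smoke-test stub for a route file. */
function scaffoldPlaywrightTest(filePath: string): string {
  const route = routeFromAppPath(filePath);
  return [
    `import { test, expect } from "@playwright/test";`,
    ``,
    `test("renders ${route}", async ({ page }) => {`,
    `  await page.goto("${route}");`,
    `  await expect(page).toHaveTitle(/.+/);`,
    `});`,
    ``,
  ].join("\n");
}

console.log(routeFromAppPath("app/blog/[slug]/page.tsx")); // "/blog/[slug]"
```

The stub is generated as text rather than executed, which is what "scaffolding" amounts to here: the agent writes the file, and the project's own Playwright install runs it.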

Overall score: 87

Quality: 78% (Does it follow best practices?)

Impact: 92% (1.26x; average score across 6 eval scenarios)

Security by Snyk

Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./engineering-team/senior-qa/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that hits all the marks. It provides highly specific capabilities with named tools and technologies, includes comprehensive natural trigger terms in an explicit 'Use when...' clause, and carves out a distinct niche around React/Next.js testing. The description is thorough yet focused, using proper third-person voice throughout.

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: scanning components to create Jest + RTL test stubs, analyzing Istanbul/LCOV coverage reports, scaffolding Playwright test files from Next.js routes, mocking API calls with MSW, creating test fixtures, and configuring test runners.

3 / 3

Completeness

Clearly answers both 'what' (generates unit/integration/E2E tests, scans components, analyzes coverage, scaffolds Playwright files, mocks APIs, creates fixtures, configures runners) and 'when' with an explicit 'Use when...' clause listing eight distinct trigger phrases.

3 / 3

Trigger Term Quality

Excellent coverage of natural trigger terms including 'generate tests', 'write unit tests', 'analyze test coverage', 'scaffold E2E tests', 'set up Playwright', 'configure Jest', 'implement testing patterns', and 'improve test quality'. Also includes technology-specific terms like MSW, Istanbul, LCOV, React Testing Library that users would naturally mention.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive with a clear niche: React/Next.js testing specifically, with named tools (Jest, RTL, Playwright, MSW, Istanbul/LCOV). The technology stack specificity and testing focus make it very unlikely to conflict with other skills.

3 / 3

Total: 12 / 12

Passed

Implementation

57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is well organized, with good progressive disclosure and a clear workflow structure, but it references Python scripts that may not exist and its code examples contain syntax errors, both of which undermine actionability. The common patterns section largely duplicates knowledge Claude already has, reducing token efficiency. Adding validation steps to workflows and fixing code examples would significantly improve quality.

Suggestions

Fix syntax errors in code examples (e.g., the malformed expect/getByRole statements in Button.test.tsx and the Quick Reference section) to make them truly copy-paste ready.

Add validation checkpoints and error recovery guidance to workflows, e.g., 'If generated tests fail to compile, check for missing imports' or 'If coverage threshold not met, re-run with --uncovered-only'.

Remove or significantly trim the 'Common Patterns Quick Reference' and 'Common Commands' sections, as these cover standard testing library usage that Claude already knows, or consolidate them into a referenced file.

Clarify whether the Python scripts (test_suite_generator.py, coverage_analyzer.py, e2e_test_scaffolder.py) are provided with the skill or need to be created, and if provided, reference their location explicitly.
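As a sketch of what "copy-paste ready" output might look like once the malformed assertions are fixed, a stub generator could emit well-formed `getByRole` queries. The generator function and the component, role, and accessible-name values below are illustrative assumptions, not taken from the skill:

```typescript
// Illustrative only: emit a well-formed Jest + React Testing Library stub
// with the kind of assertion the review flags as malformed (e.g.
// `{ name: "click-mei-tobeinthedocument" }`). The stub is produced as a
// string; component and role names are hypothetical.
function scaffoldComponentTest(
  component: string,
  role: string,
  accessibleName: string
): string {
  return [
    `import { render, screen } from "@testing-library/react";`,
    `import { ${component} } from "./${component}";`,
    ``,
    `test("renders the ${accessibleName} ${role}", () => {`,
    `  render(<${component} />);`,
    `  expect(screen.getByRole("${role}", { name: "${accessibleName}" })).toBeInTheDocument();`,
    `});`,
    ``,
  ].join("\n");
}

console.log(scaffoldComponentTest("Button", "button", "Submit"));
```

Keeping the query and the matcher on one syntactically complete line (`getByRole("button", { name: "Submit" })` followed by `.toBeInTheDocument()`) is exactly the property the review's first suggestion asks for.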

Dimension / Reasoning / Score

Conciseness

The skill is reasonably well-structured but includes some unnecessary verbosity. The 'Common Commands' section at the end repeats commands already shown in workflows. The 'Common Patterns Quick Reference' section covers standard React Testing Library and Playwright patterns that Claude already knows well (query priorities, async patterns, MSW setup). These add token cost without much unique value.

2 / 3

Actionability

The skill references Python scripts (test_suite_generator.py, coverage_analyzer.py, e2e_test_scaffolder.py) extensively but never provides their implementation or confirms they exist. The generated test code examples contain syntax errors (e.g., malformed expect statements like `{ name: "click-mei-tobeinthedocument"` and `{ name: "submiti"`), making them not copy-paste ready. The CLI tool flags appear invented without verifiable documentation.

2 / 3

Workflow Clarity

The three workflows (Unit Test Generation, Coverage Analysis, E2E Test Setup) are clearly sequenced with numbered steps, which is good. However, they lack explicit validation checkpoints and error recovery steps. For example, there's no 'if tests fail, do X' guidance, no validation after generating test stubs to ensure they compile, and no feedback loop for fixing issues found by the coverage analyzer.

2 / 3

Progressive Disclosure

The skill has a clear Quick Start section, organized tool descriptions, detailed workflows, and a reference table pointing to separate files (references/testing_strategies.md, references/test_automation_patterns.md, references/qa_best_practices.md). Navigation is well-signaled and references are one level deep.

3 / 3

Total: 9 / 12

Passed
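One way to make the missing validation checkpoint in the coverage workflow concrete: parse an LCOV report and fail when any file drops below a threshold. This is a minimal sketch of the idea, not the skill's `coverage_analyzer.py`; the fields used (`SF:` source file, `LF:` lines found, `LH:` lines hit, `end_of_record`) are from the LCOV text format:

```typescript
// Minimal sketch of a coverage checkpoint: parse LCOV text and list files
// whose line coverage falls below a threshold. Not the skill's
// coverage_analyzer.py.
interface CoverageGap {
  file: string;
  pct: number;
}

function findCoverageGaps(lcov: string, threshold: number): CoverageGap[] {
  const gaps: CoverageGap[] = [];
  let file = "";
  let found = 0;
  let hit = 0;
  for (const line of lcov.split("\n")) {
    if (line.startsWith("SF:")) {
      file = line.slice(3);
    } else if (line.startsWith("LF:")) {
      found = Number(line.slice(3));
    } else if (line.startsWith("LH:")) {
      hit = Number(line.slice(3));
    } else if (line.trim() === "end_of_record") {
      const pct = found === 0 ? 100 : (100 * hit) / found;
      if (pct < threshold) gaps.push({ file, pct });
    }
  }
  return gaps;
}

const sample = [
  "SF:src/Button.tsx", "LF:10", "LH:9", "end_of_record",
  "SF:src/Form.tsx", "LF:20", "LH:8", "end_of_record",
].join("\n");

console.log(findCoverageGaps(sample, 80)); // [{ file: "src/Form.tsx", pct: 40 }]
```

A workflow step like "after analysis, if `findCoverageGaps` returns anything, generate tests for those files and re-run" would give the coverage loop the feedback mechanism the review says is missing.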

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: alirezarezvani/claude-skills (Reviewed)
