generating-unit-tests

This skill enables Claude to automatically generate comprehensive unit tests from source code. It is triggered when the user requests unit tests, test cases, or test suites for specific files or code snippets. The skill supports multiple testing frameworks including Jest, pytest, JUnit, and others, intelligently detecting the appropriate framework or using one specified by the user. Use this skill when the user asks to "generate tests", "create unit tests", or uses the shortcut "gut" followed by a file path.


Quality

53%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security (by Snyk)

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./backups/skills-migration-20251108-070147/plugins/testing/unit-test-generator/skills/unit-test-generator/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that clearly communicates what the skill does (generates unit tests with multi-framework support), when to use it (explicit trigger phrases and shortcut), and includes natural keywords users would say. It uses proper third-person voice throughout and provides enough specificity to distinguish it from other coding-related skills.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description lists multiple concrete actions: 'generate comprehensive unit tests from source code', supports 'Jest, pytest, JUnit', 'intelligently detecting the appropriate framework', and handles 'specific files or code snippets'. These are specific, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (generate comprehensive unit tests, supports multiple frameworks, detects appropriate framework) and 'when' (explicit 'Use this skill when...' clause with trigger phrases like 'generate tests', 'create unit tests', and the 'gut' shortcut). | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would actually say: 'generate tests', 'create unit tests', 'unit tests', 'test cases', 'test suites', and the shortcut 'gut'. Also mentions specific framework names (Jest, pytest, JUnit) which users might reference. | 3 / 3 |
| Distinctiveness / Conflict Risk | The skill has a clear niche focused specifically on unit test generation from source code, with distinct triggers like the 'gut' shortcut and specific framework names. It is unlikely to conflict with general coding or documentation skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation

7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads like a product description or README rather than actionable instructions for Claude. It lacks any concrete code examples, test templates, or specific patterns to follow, instead describing what the skill does in abstract terms. The content wastes significant tokens explaining concepts Claude already understands while failing to provide the executable guidance needed to actually generate quality tests.

Suggestions

Replace the abstract 'How It Works' section with a concrete step-by-step workflow Claude should follow, including reading the source file, identifying testable units, and writing tests with specific structural patterns.

Add executable code templates for at least 2-3 frameworks (e.g., Jest, pytest) showing the exact test structure, assertion patterns, and mocking approaches Claude should use.

Include a concrete input/output example showing actual source code and the corresponding generated test file, rather than describing what the skill 'will do'.

Add validation steps such as checking that generated tests import correctly, cover the main code paths, and follow the project's existing test conventions.
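To make the suggestions above concrete, here is a minimal sketch of the kind of executable template the review is asking for, in pytest style. The `slugify` function, its module placement, and the test names are hypothetical illustrations, not part of the reviewed skill; a real template would substitute the project's own units under test. The tests are plain functions with bare `assert` statements, which pytest collects and runs without any framework-specific imports.

```python
# Hypothetical source unit the generated tests target.
def slugify(text: str) -> str:
    """Lowercase, trim, and join words with hyphens."""
    return "-".join(text.strip().lower().split())


# Generated test suite following the suggested structure:
# one happy path, one edge case, one boundary input.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_strips_and_collapses_whitespace():
    assert slugify("  padded  title ") == "padded-title"


def test_slugify_empty_string():
    assert slugify("") == ""
```

A template like this also doubles as the input/output example the third suggestion calls for: the source function is the input, and the three test functions are the expected generated output.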

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is verbose and explains concepts Claude already knows (what unit tests are, what mocking is, what edge cases are). Sections like 'How It Works', 'When to Use This Skill', and 'Integration' describe rather than instruct, wasting tokens on information that adds no actionable value. | 1 / 3 |
| Actionability | There is no executable code, no concrete commands, no template for test output, and no specific patterns to follow. The examples describe what the skill 'will do' in abstract terms rather than showing actual generated test code or providing copy-paste-ready templates. | 1 / 3 |
| Workflow Clarity | The 'How It Works' section lists abstract conceptual steps (analyze, determine, generate) rather than concrete operational steps Claude should follow. There are no validation checkpoints, no feedback loops for when tests fail or don't compile, and no verification that generated tests actually run. | 1 / 3 |
| Progressive Disclosure | The content is organized into sections with headers, which provides some structure. However, there are no references to external files, and content that could be split out (e.g., framework-specific templates, example outputs) is neither inline nor referenced—it's simply absent. | 2 / 3 |
| Total | | 5 / 12 |

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository
jeremylongshore/claude-code-plugins-plus-skills
Reviewed
