Generates unit tests for a function or class by analyzing branches, boundaries, and error paths — then emits test code in the project's existing framework and style. Covers happy path, edge cases, and failure modes with mocks for external dependencies. Use when writing tests for new code, when backfilling coverage on untested functions, when the user asks to generate tests, or when a coverage report shows specific gaps.
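For illustration, a minimal sketch of the kind of output such a skill might emit, assuming a pytest project. The `parse_port` and `fetch_config` functions are hypothetical stand-ins for code under test, not part of the skill:

```python
import pytest
from unittest import mock

def parse_port(value):
    # Hypothetical function under test: parse a TCP port from a string.
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def fetch_config(client):
    # Hypothetical function whose external dependency is injected as `client`.
    return client.get("/config").json()

# Happy path.
def test_parse_port_happy_path():
    assert parse_port("8080") == 8080

# Boundary values at both ends of the valid range.
@pytest.mark.parametrize("edge", ["1", "65535"])
def test_parse_port_boundaries(edge):
    assert parse_port(edge) == int(edge)

# Failure modes: out-of-range and non-numeric input.
@pytest.mark.parametrize("bad", ["0", "65536", "-1", "abc"])
def test_parse_port_failure_modes(bad):
    with pytest.raises(ValueError):
        parse_port(bad)

# External dependency replaced with a mock, per the skill's description.
def test_fetch_config_mocks_http():
    client = mock.Mock()
    client.get.return_value.json.return_value = {"debug": True}
    assert fetch_config(client) == {"debug": True}
    client.get.assert_called_once_with("/config")
```

Note how the cases map onto the description: one happy path, two boundary tests, four error-path tests, and one test that isolates the external dependency behind a mock.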
Install with Tessl CLI:

```shell
npx tessl i github:santosomar/general-secure-coding-agent-skills --skill unit-test-generator
```
Quality: 96% — Does it follow best practices?

Impact: Pending — No eval scenarios have been run.
Discovery: 100% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly articulates specific capabilities (analyzing code paths, generating framework-appropriate tests, handling mocks) and provides comprehensive trigger guidance. It uses proper third-person voice throughout and includes natural developer terminology that would match real user requests.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple concrete actions: 'analyzing branches, boundaries, and error paths', 'emits test code', 'covers happy path, edge cases, and failure modes', 'mocks for external dependencies'. Very specific about what it does. | 3 / 3 |
| Completeness | Clearly answers both what (generates unit tests with a specific methodology) and when, with an explicit 'Use when...' clause covering four distinct trigger scenarios: new code, backfilling coverage, user requests, and coverage gaps. | 3 / 3 |
| Trigger Term Quality | Includes natural terms users would say: 'unit tests', 'tests', 'coverage', 'generate tests', 'untested functions', 'coverage report', 'edge cases', 'mocks'. Good coverage of how developers naturally discuss testing. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly specific to unit test generation with a clear niche. Terms like 'unit tests', 'coverage report', 'mocks', and 'test code' are distinct from general coding skills. Unlikely to conflict with code review or general coding assistance skills. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation: 92% — Reviews the quality of instructions and guidance provided to agents. A good implementation is clear, handles edge cases, and produces reliable results.

This is an excellent, highly actionable skill that provides comprehensive guidance for unit test generation. The 5-step workflow is clear, and the worked example demonstrates exactly how to apply each concept. The oracle validation section is particularly valuable for preventing common testing anti-patterns. A minor improvement opportunity lies in splitting the reference tables into separate files for better progressive disclosure.
Suggestions:

- Consider extracting the per-type input edges table and the dependency replacement table into a separate REFERENCE.md file, keeping SKILL.md as a leaner overview with links.
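To illustrate the tautological-test anti-pattern that the oracle validation step guards against, here is a minimal sketch using a hypothetical `slugify` function (not taken from the skill itself):

```python
def slugify(title):
    # Hypothetical function under test.
    return title.strip().lower().replace(" ", "-")

# Tautological: the expected value is re-derived from the implementation,
# so this test passes even if slugify's logic is wrong.
def test_slug_tautology():
    title = "Hello World"
    assert slugify(title) == title.strip().lower().replace(" ", "-")

# Oracle-based: the expected value is stated independently of the code
# under test, so a regression in slugify actually fails the test.
def test_slug_oracle():
    assert slugify("  Hello World ") == "hello-world"
```

The first test encodes no independent knowledge of the desired behavior; the second does, which is the distinction the oracle validation step enforces.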
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is dense with actionable information and avoids explaining concepts Claude already knows. Every section earns its place with specific tables, concrete examples, and precise guidance. No padding or unnecessary context. | 3 / 3 |
| Actionability | Provides fully executable code examples, specific framework detection criteria, concrete enumeration tables, and a complete worked example with 7 runnable pytest tests. Copy-paste ready throughout. | 3 / 3 |
| Workflow Clarity | Clear 5-step sequential process with explicit checkpoints. Each step has concrete deliverables (detect framework → enumerate coverage → find oracle → isolate dependencies → emit). The oracle validation step explicitly prevents the common anti-pattern of tautological tests. | 3 / 3 |
| Progressive Disclosure | Content is well organized with clear sections and tables, but it is a monolithic document (~300 lines) that could benefit from splitting detailed reference tables (per-type edges, dependency replacement) into separate files. References to other skills exist, but inline content is heavy. | 2 / 3 |
| Total | | 11 / 12 — Passed |
Validation: 100% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 — Passed. No warnings or errors.