Pytest Test Generator - Auto-activating skill for Test Automation. Triggers on: pytest test generator, pytest test generator. Part of the Test Automation skill category.
Does it follow best practices?
Impact: 90% (1.00x average score across 3 eval scenarios)
Passed: no known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./planned-skills/generated/09-test-automation/pytest-test-generator/SKILL.md`

Quality
Discovery
7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a title and category label with no substantive content. It lacks concrete actions, meaningful trigger terms, and explicit guidance on when Claude should select this skill. The duplicate trigger term suggests auto-generated boilerplate rather than a thoughtfully crafted description.
Suggestions
- Add specific concrete actions the skill performs, e.g., 'Generates pytest test functions, creates fixtures, adds parameterized test cases, and mocks dependencies for Python modules.'
- Add an explicit 'Use when...' clause with natural trigger scenarios, e.g., 'Use when the user asks to write tests, generate unit tests, create pytest files, add test coverage, or mentions pytest, testing, or test cases for Python code.'
- Include natural keyword variations users would say: 'unit tests', 'test cases', 'test coverage', 'write tests', 'Python testing', '.py test files', 'conftest', 'fixtures'.
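Taken together, the suggestions above could be combined into a single rewritten description. The frontmatter below is an illustrative sketch only; the key names and wording are assumptions, not the skill's actual content:

```yaml
# SKILL.md frontmatter (sketch; field values are illustrative)
name: pytest-test-generator
description: >
  Generates pytest test functions, creates fixtures, adds parameterized
  test cases, and mocks dependencies for Python modules. Use when the
  user asks to write tests, generate unit tests, create pytest files,
  add test coverage, or mentions pytest, fixtures, conftest, test cases,
  or Python testing.
```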
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain ('pytest', 'test automation') but does not describe any concrete actions. There are no specific capabilities listed like 'generates unit tests', 'creates fixtures', 'mocks dependencies', etc. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name itself, and the 'when' clause is just a repeated trigger phrase rather than meaningful guidance. There is no explicit 'Use when...' clause with real trigger scenarios. | 1 / 3 |
| Trigger Term Quality | The trigger terms are just 'pytest test generator' repeated twice. It misses natural user phrases like 'write tests', 'unit tests', 'test cases', 'generate pytest', 'testing', '.py test files', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'pytest' specifically provides some distinctiveness from general coding or other testing framework skills, but 'Test Automation' is broad enough to overlap with other testing-related skills. | 2 / 3 |
| Total | | 5 / 12 Passed |
Implementation
0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is essentially a placeholder template with no actual instructional content. It repeatedly references 'pytest test generator' without ever providing concrete guidance, code examples, pytest patterns, mocking strategies, or any actionable information. It fails on every dimension because it contains no substance—only meta-descriptions of what a skill would theoretically do.
Suggestions
Add concrete, executable pytest code examples showing test generation patterns (e.g., parametrized tests, fixtures, mocking with pytest-mock)
Define a clear workflow for generating tests: analyze source code → identify test cases → write test functions → run and validate with `pytest --tb=short`
Remove all meta-description sections ('Purpose', 'When to Use', 'Example Triggers') and replace with actionable content like quick-start examples, common patterns, and pytest configuration snippets
Include specific pytest best practices such as fixture scoping, conftest.py organization, assertion patterns, and common mocking approaches with concrete code
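The kind of concrete content these suggestions call for can be sketched as below. This is an illustrative example only, not content from the reviewed skill: `slugify` and `fetch_title` are hypothetical names, and stdlib `unittest.mock.patch` stands in for the `mocker` fixture that pytest-mock would provide.

```python
# Sketch of parametrized tests, a fixture, and mocking (names are
# invented for illustration; not part of the reviewed skill).
import pytest
from unittest.mock import patch

def slugify(title: str) -> str:
    # Toy function under test: lowercase the title and hyphen-join words.
    return "-".join(title.lower().split())

def fetch_title(url: str) -> str:
    # Pretend network dependency, patched out in the test below.
    raise RuntimeError("network disabled in tests")

@pytest.fixture
def sample_titles():
    # Function-scoped fixture (the default); use scope="module" to share
    # one instance across a module's tests.
    return ["Hello World", "  Spaced  Out  "]

@pytest.mark.parametrize(
    ("title", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  Spaced  Out  ", "spaced-out"),
        ("single", "single"),
    ],
)
def test_slugify(title, expected):
    assert slugify(title) == expected

def test_no_stray_hyphens(sample_titles):
    for title in sample_titles:
        slug = slugify(title)
        assert not slug.startswith("-") and not slug.endswith("-")

def test_slug_from_url():
    # Mock the dependency with stdlib unittest.mock; pytest-mock's
    # `mocker.patch` is the plugin-based equivalent.
    with patch(f"{__name__}.fetch_title", return_value="Mocked Title"):
        assert slugify(fetch_title("https://example.com")) == "mocked-title"
```

Run with `pytest --tb=short` as the suggested workflow's validation step; pytest discovers the `test_*` functions and injects the fixture automatically.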
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is entirely filler and meta-description. It explains what the skill does in abstract terms without providing any actual technical content. Every section restates the same vague information about 'pytest test generator' without adding substance. | 1 / 3 |
| Actionability | There is zero concrete guidance: no code examples, no commands, no specific patterns, no pytest configuration, no test templates. The content describes rather than instructs, offering nothing executable or copy-paste ready. | 1 / 3 |
| Workflow Clarity | No workflow, steps, or process is defined. The skill claims to provide 'step-by-step guidance' but contains none. There are no validation checkpoints or sequenced instructions of any kind. | 1 / 3 |
| Progressive Disclosure | The content is a flat, repetitive document with no meaningful structure. There are no references to detailed files, no layered content organization, and the sections are superficial headers over near-identical vague statements. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

9 / 11 checks passed (validation for skill structure):
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
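Both warnings are typically cleared in the frontmatter itself. A hedged sketch, since the skill's actual frontmatter is not shown in this report (the tool names and the `category` key below are assumptions):

```yaml
# Sketch only: keep allowed-tools to standard tool names and move
# unrecognized top-level keys under `metadata`.
allowed-tools:
  - Read
  - Write
  - Bash
metadata:
  category: test-automation   # formerly an unknown top-level key (assumed)
```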