Python testing strategies with pytest, including the TDD methodology, fixtures, mocking, parameterization, and coverage requirements.
55%
Does it follow best practices?

Impact: — (no eval scenarios have been run)
Validation: Passed
Issues: no known issues

Optimize this skill with Tessl:

npx tessl skill review --optimize ./docs/zh-CN/skills/python-testing/SKILL.md

Quality
Discovery
67%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description does a good job listing specific pytest-related capabilities and testing concepts, making it clear what the skill covers. However, it lacks an explicit 'Use when...' clause, which is critical for Claude to know when to select this skill. Adding trigger guidance and a few more natural user terms would significantly improve its effectiveness.
Suggestions

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about writing Python tests, creating test suites, setting up pytest, or improving test coverage.'
- Include common user-facing trigger terms and variations such as 'unit tests', 'test cases', 'test_*.py', 'conftest.py', and 'test-driven development'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions/concepts: TDD methodology, fixtures, mocking, parameterization, and coverage requirements. These are all concrete, identifiable testing techniques rather than vague abstractions. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' (Python testing strategy with pytest including TDD, fixtures, mocking, parameterization, coverage), but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this at 2 per the rubric. | 2 / 3 |
| Trigger Term Quality | Includes good keywords like 'pytest', 'Python', 'TDD', 'fixtures', 'mocking', 'parameterization', and 'coverage', but is missing common user variations such as 'unit tests', 'test cases', 'test-driven development' (English), '.py tests', or 'assert'. Also lacks file extension triggers like 'test_*.py' or 'conftest.py'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'pytest', 'Python testing', 'TDD', and specific testing concepts like fixtures and parameterization creates a clear niche that is unlikely to conflict with other skills. It is distinctly about Python test strategy with pytest. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
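The fixtures and parameterization that the description names can be sketched in a few lines of pytest. The `calculator` fixture and the object it returns are hypothetical, used only to illustrate the pattern:

```python
import pytest

@pytest.fixture
def calculator():
    # Hypothetical object under test, rebuilt fresh for each test.
    return {"add": lambda a, b: a + b}

# Each tuple becomes its own test case with its own pass/fail result.
@pytest.mark.parametrize("a, b, expected", [(1, 2, 3), (0, 0, 0), (-1, 1, 0)])
def test_add(calculator, a, b, expected):
    assert calculator["add"](a, b) == expected
```

Running `pytest -v` on this file reports one test per parameter tuple, which is what makes parameterization cheaper than hand-writing near-identical tests.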
Implementation
42%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a comprehensive pytest reference manual rather than a focused, efficient skill document. While the code examples are excellent and fully executable, the document is far too verbose — it explains many concepts Claude already knows (basic assertions, TDD definition, what fixtures are) and dumps everything into a single monolithic file. It would benefit enormously from aggressive trimming and splitting into focused sub-documents.
Suggestions

- Cut the document to ~100 lines by removing content Claude already knows (basic assertions, basic test structure, TDD definition) and keeping only project-specific conventions, non-obvious patterns, and the quick reference table.
- Split detailed sections (async testing, mocking patterns, configuration) into separate referenced files like MOCKING.md, ASYNC_TESTING.md, CONFIG.md with clear navigation links from the main skill.
- Add validation/feedback loops: e.g., 'After writing tests, run `pytest --cov` and verify coverage meets the 80% threshold. If below, identify untested paths with `--cov-report=term-missing` and add targeted tests.'
- Remove the basic assertions catalog entirely — Claude knows Python assertions. Instead, focus only on pytest-specific patterns like `pytest.raises` with match, and project-specific assertion helpers if any.
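The `pytest.raises`-with-`match` pattern that the last suggestion points to looks like this. The `withdraw` helper is hypothetical, included only so the example is self-contained:

```python
import pytest

def withdraw(balance, amount):
    # Hypothetical function under test.
    if amount > balance:
        raise ValueError(f"insufficient funds: balance {balance}, requested {amount}")
    return balance - amount

# match= is a regular expression searched against str(exc), so the test
# pins down both the exception type and the message content.
with pytest.raises(ValueError, match=r"insufficient funds: balance 100"):
    withdraw(100, 250)
```

This is worth keeping in a trimmed skill precisely because it is pytest-specific: a bare `try`/`except` assertion is something the model already knows, while `match=` is an easy detail to get wrong.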
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~600+ lines. Explains basic pytest concepts Claude already knows (assertions, basic test structure, what TDD is). The basic assertions section alone lists trivial patterns like `assert result == expected`. The 'basic test structure' section with `assert 2 + 2 == 4` adds no value. Much of this is reference documentation that Claude doesn't need. | 1 / 3 |
| Actionability | All code examples are concrete, executable, and copy-paste ready. Includes complete pytest commands, configuration files (pytest.ini, pyproject.toml), and real patterns for API testing, database testing, async testing, and mocking. | 3 / 3 |
| Workflow Clarity | The TDD cycle (red-green-refactor) is clearly sequenced, and the test organization structure is well-defined. However, there are no validation checkpoints or feedback loops — e.g., no guidance on what to do when coverage drops below 80%, no steps for diagnosing flaky tests, and no verification steps after setting up test infrastructure. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. All content — from basic assertions to async testing to configuration — is inlined in a single massive document. Content like the full assertions reference, async patterns, and configuration examples should be split into separate files with clear navigation links. | 1 / 3 |
| Total | | 7 / 12 (Passed) |
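As a sketch of the kind of mocking pattern the Actionability row credits — the function, URL, and response shape here are hypothetical, not taken from the skill — stubbing out a network call with the standard library's `unittest.mock` looks like this:

```python
import json
import urllib.request
from unittest.mock import MagicMock, patch

def fetch_username(user_id):
    # Hypothetical function under test: fetches a user record over HTTP.
    url = f"https://api.example.com/users/{user_id}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["name"]

def test_fetch_username():
    # Fake response object: json.load() will call .read() on it.
    fake_resp = MagicMock()
    fake_resp.read.return_value = b'{"name": "ada"}'
    with patch("urllib.request.urlopen") as mock_urlopen:
        # urlopen is used as a context manager, so wire up __enter__.
        mock_urlopen.return_value.__enter__.return_value = fake_resp
        assert fetch_username(42) == "ada"
        mock_urlopen.assert_called_once_with("https://api.example.com/users/42")
```

No network traffic occurs: `patch` swaps `urllib.request.urlopen` for a mock for the duration of the `with` block, and the final assertion verifies the URL the code actually built.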
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (817 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
841beea