Python testing strategy using pytest, TDD methodology, fixtures, mocking, parameterization, and coverage requirements.
Impact — 56%
1.18x average score across 3 eval scenarios. Passed; no known issues.

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./docs/zh-CN/skills/python-testing/SKILL.md`

Quality — Does it follow best practices?
Discovery — 32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies the domain (Python testing with pytest) and lists relevant techniques, but reads more like a topic list than an actionable skill description. It lacks concrete action verbs describing what the skill does and completely omits guidance on when Claude should use it, making it difficult for Claude to reliably select this skill from a large pool.
Suggestions
- Add a 'Use when...' clause with explicit triggers like 'Use when writing Python tests, setting up pytest, implementing TDD, or when the user mentions unit testing, test coverage, or mocking'
- Convert the technique list into concrete actions: 'Writes pytest test cases, implements TDD workflows, creates fixtures and mocks, configures parameterized tests, and enforces coverage requirements'
- Include common English trigger terms alongside Chinese to improve matching: 'pytest, unit tests, test-driven development, mocking, test fixtures'
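A revised frontmatter description applying these suggestions might look like the following sketch. The wording is illustrative, not the reviewed skill's actual text:

```yaml
# Hypothetical SKILL.md frontmatter incorporating the suggestions above
name: python-testing
description: >
  Writes pytest test cases, implements TDD workflows, creates fixtures and
  mocks, configures parameterized tests, and enforces coverage requirements.
  Use when writing Python tests, setting up pytest, implementing TDD, or when
  the user mentions unit tests, test coverage, mocking, or test fixtures.
```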
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Python testing) and lists several specific concepts (pytest, TDD, fixtures, mocking, parameterization, coverage requirements), but doesn't describe concrete actions like 'write tests', 'generate test cases', or 'analyze coverage reports'. | 2 / 3 |
| Completeness | Describes 'what' (Python testing strategy with various techniques) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing explicit trigger guidance caps completeness at 2, and this description falls below even that bar. | 1 / 3 |
| Trigger Term Quality | Includes relevant technical keywords (pytest, TDD, fixtures/夹具, mocking/模拟, parameterization/参数化, coverage/覆盖率) that users might mention, but is missing common variations like 'unit tests', 'test cases', 'testing', or English equivalents that bilingual users might use. | 2 / 3 |
| Distinctiveness / Conflict Risk | Focuses specifically on Python testing with pytest, which provides some distinction, but 'testing strategy' is broad enough to potentially overlap with other testing-related skills or general Python development skills. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation — 57%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides comprehensive, actionable pytest guidance with excellent executable code examples covering fixtures, mocking, parametrization, and async testing. However, it is overly verbose and monolithic; the content would be more effective if split across multiple files with a concise overview. Some explanations of basic concepts Claude already knows could be trimmed.
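Of the areas listed above, async testing is the least standardized. One plugin-free pattern (a sketch, not the reviewed skill's actual code; `fetch_greeting` is a hypothetical name) drives coroutines from a plain test via `asyncio.run`:

```python
import asyncio


async def fetch_greeting(name: str) -> str:
    """Toy coroutine standing in for real async project code."""
    await asyncio.sleep(0)  # yield control once, as a real await would
    return f"hello {name}"


def test_fetch_greeting() -> None:
    # Plain pytest can drive coroutines via asyncio.run; no pytest-asyncio
    # plugin is required for simple cases like this.
    assert asyncio.run(fetch_greeting("pytest")) == "hello pytest"
```

For larger async suites, a dedicated plugin that supplies an event-loop fixture is usually cleaner than calling `asyncio.run` in every test.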
Suggestions
- Split content into separate files (FIXTURES.md, MOCKING.md, ASYNC.md, PATTERNS.md) and keep SKILL.md as a concise overview with clear navigation links
- Remove explanations of concepts Claude already knows (e.g., what TDD is, basic assert semantics, what fixtures are conceptually) and focus on project-specific patterns
- Add a troubleshooting/validation section with explicit steps for debugging failing tests and common error patterns
- Trim the 'Best Practices' section to just the project-specific conventions rather than general testing advice
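The style of snippet the review praises combines a fixture-worthy helper, a mock, and parametrized cases. The sketch below is illustrative only; `parse_price`, `PriceClient`, and `cheapest` are hypothetical stand-ins, not code from the reviewed skill:

```python
from unittest import mock

import pytest


def parse_price(raw: str) -> float:
    """Toy function under test: '$3.50' -> 3.5."""
    return round(float(raw.strip().lstrip("$")), 2)


class PriceClient:
    """Pretend network client; tests replace it with a mock."""

    def fetch(self, sku: str) -> str:
        raise RuntimeError("network disabled in tests")


def cheapest(client: PriceClient, skus: list[str]) -> float:
    """Return the lowest price across the given SKUs."""
    return min(parse_price(client.fetch(s)) for s in skus)


@pytest.mark.parametrize(
    ("raw", "expected"),
    [("$3.50", 3.5), (" 7.25 ", 7.25), ("$0.99", 0.99)],
)
def test_parse_price(raw: str, expected: float) -> None:
    assert parse_price(raw) == expected


def test_cheapest_uses_client() -> None:
    # spec=PriceClient makes the mock reject calls the real class lacks.
    client = mock.Mock(spec=PriceClient)
    client.fetch.side_effect = ["$3.50", "$0.99"]
    assert cheapest(client, ["sku-a", "sku-b"]) == 0.99
    assert client.fetch.call_count == 2
```

Because `parametrize` only attaches a mark, each test function also remains directly callable, which keeps debugging simple.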
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes some unnecessary explanations (e.g., explaining what TDD is, basic assert concepts). Many sections could be tightened; the content is useful but verbose for Claude, which already knows pytest fundamentals. | 2 / 3 |
| Actionability | Excellent executable code examples throughout: all Python snippets are copy-paste ready with proper imports, complete function definitions, and realistic usage patterns. Commands for running pytest are specific and complete. | 3 / 3 |
| Workflow Clarity | The TDD cycle (red-green-refactor) is clearly explained with steps, but the skill lacks validation checkpoints for test execution workflows. There is no explicit guidance on what to do when tests fail or how to debug failing tests systematically. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. All content is inline in one massive document. The skill would benefit greatly from splitting into separate files (e.g., FIXTURES.md, MOCKING.md, ASYNC.md) with a concise overview. | 1 / 3 |
| Total | | 8 / 12 — Passed |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (816 lines); consider splitting into references/ and linking | Warning |
| Total | 10 / 11 Passed | |