Python testing strategy with pytest, including TDD methodology, fixtures, mocking, parameterization, and coverage requirements.
Does it follow best practices?

Impact: Pending — no eval scenarios have been run.
Passed — no known issues.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./docs/zh-CN/skills/python-testing/SKILL.md`

Quality
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description effectively lists specific pytest-related capabilities and testing concepts, making it clear what the skill covers. However, it lacks an explicit 'Use when...' clause, which is critical for Claude to know when to select this skill. The description is in Chinese, which may limit trigger matching for English-speaking users.
Suggestions
Add an explicit 'Use when...' clause, e.g., '当用户需要编写Python测试、使用pytest框架、进行测试驱动开发或提高测试覆盖率时使用。' ("Use when the user needs to write Python tests, use the pytest framework, do test-driven development, or improve test coverage.")
Include common English and Chinese trigger term variations such as 'unit test', 'test cases', 'assert', '单元测试', '测试用例' to improve matching coverage.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions/concepts: TDD methodology, fixtures, mocking, parameterization, and coverage requirements. These are all concrete, identifiable testing techniques rather than vague abstractions. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' (Python testing strategy with pytest including TDD, fixtures, mocking, parameterization, coverage), but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this at 2 per the rubric guidelines. | 2 / 3 |
| Trigger Term Quality | Includes good keywords like 'pytest', 'Python', 'TDD', 'fixtures', 'mocking', 'parameterization', and 'coverage', but is missing common user variations such as 'unit test', 'test-driven development', 'test cases', '.py test files', or 'assert'. It also lacks English equivalents, which could limit matching in multilingual contexts. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of pytest, Python testing, TDD, and specific testing techniques like fixtures and parameterization creates a clear niche. This is unlikely to conflict with other skills unless there are multiple Python testing skills, as the scope is well-defined. | 3 / 3 |
| Total | | 10 / 12 — Passed |
Implementation — 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a comprehensive pytest reference manual but suffers from extreme verbosity, explaining many concepts Claude already knows (basic assertions, simple test structure, how to run pytest). The code examples are high quality and executable, which is its main strength. However, the monolithic structure with no progressive disclosure and excessive coverage of basics makes it a poor fit for a SKILL.md that should be lean and assume Claude's competence.
Suggestions
Remove basic/trivial sections Claude already knows: basic assertions, basic test structure (test_addition, test_string_uppercase), and generic best practices. Focus only on project-specific conventions and non-obvious patterns.
Split into multiple files: keep SKILL.md as a concise overview (~50-80 lines) with links to separate files like FIXTURES.md, MOCKING.md, ASYNC_TESTING.md, and PATTERNS.md.
Add validation/feedback loops to the TDD workflow: e.g., 'Run pytest --cov after each green step to verify coverage target; if below 80%, add tests before proceeding to refactor.'
Remove the quick reference table and running tests section—these are standard pytest CLI knowledge that Claude already has.
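To illustrate the kind of non-obvious pattern worth keeping after such a trim, here is a minimal sketch of mocking a dependency at its lookup site. This is our own example, not code from the reviewed skill; `fetch_user` and `greet` are hypothetical names.

```python
# Sketch of a non-obvious pytest pattern: patch the name where it is
# looked up (this module), not where it is defined.
# All names here are hypothetical -- none come from the reviewed skill.
from unittest.mock import patch

def fetch_user(user_id):
    # Stands in for a real network call; tests must mock it.
    raise RuntimeError("network unavailable in tests")

def greet(user_id):
    user = fetch_user(user_id)
    return f"Hello, {user['name']}!"

def test_greet_with_mocked_fetch():
    # Patching f"{__name__}.fetch_user" targets the lookup site, so
    # greet() sees the mock regardless of where fetch_user is defined.
    with patch(f"{__name__}.fetch_user", return_value={"name": "Ada"}):
        assert greet(1) == "Hello, Ada!"
```

Run with `pytest -q`; the same idea carries over to fixture-based setups via pytest's `monkeypatch`.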
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 817 lines. Explains basic pytest concepts Claude already knows (assertions, basic test structure, how to run pytest). The basic assertions section, basic test structure, and many trivial examples (test_addition, test_string_uppercase, test_list_append) waste significant tokens. The 'Do/Don't' best practices are generic testing advice Claude already possesses. | 1 / 3 |
| Actionability | All code examples are concrete, executable, and copy-paste ready. Covers fixtures, parametrize, mocking, async testing, and configuration with real Python code and bash commands. | 3 / 3 |
| Workflow Clarity | The TDD cycle (Red-Green-Refactor) is clearly sequenced, and the test organization structure is well-defined. However, there are no validation checkpoints or feedback loops for the testing workflow itself (e.g., what to do when coverage drops, how to diagnose flaky tests, or iterative debugging steps). | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. All content is inline in a single massive document: API testing patterns, async testing, mocking, configuration, and basic examples are crammed together with no signposting to separate reference files. | 1 / 3 |
| Total | | 7 / 12 — Passed |
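The feedback loop missing from the Workflow Clarity row could be as small as a gate after each green step. A minimal sketch, assuming the 80% coverage target from the suggestions; the `coverage_gate` name is ours, not the skill's:

```python
# Hypothetical TDD checkpoint: after a "green" step, decide whether to
# proceed to refactor or loop back to writing tests, based on measured
# coverage (e.g., parsed from `pytest --cov` output).

def coverage_gate(measured: float, target: float = 80.0) -> str:
    """Return the next Red-Green-Refactor action for this coverage level."""
    if measured >= target:
        return "refactor"   # target met: safe to clean up
    return "add tests"      # below target: strengthen tests first

print(coverage_gate(72.0))  # below 80 -> "add tests"
print(coverage_gate(91.5))  # at/above 80 -> "refactor"
```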
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (817 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
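The two warnings above can be approximated with a small checker. This is a hedged sketch: the 500-line budget and the `KNOWN_KEYS` set are our assumptions, not Tessl's actual validation spec.

```python
# Hypothetical re-implementation of the two warning checks above.
# The line budget and the set of known frontmatter keys are guesses;
# Tessl's real validator may differ.

KNOWN_KEYS = {"name", "description"}

def check_skill_md(text: str, max_lines: int = 500) -> list:
    warnings = []
    lines = text.splitlines()
    if len(lines) > max_lines:
        warnings.append("skill_md_line_count")
    # Frontmatter is the block between the leading pair of '---' lines.
    if lines and lines[0] == "---" and "---" in lines[1:]:
        end = lines.index("---", 1)
        for line in lines[1:end]:
            key = line.split(":", 1)[0].strip()
            if key and key not in KNOWN_KEYS:
                warnings.append("frontmatter_unknown_keys")
                break
    return warnings

# An 817-line file with an unknown 'version' key trips both checks.
sample = "---\nname: python-testing\nversion: 1\n---\n" + "x\n" * 813
print(check_skill_md(sample))
```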