
python-testing

Python testing strategies with pytest, including TDD methodology, fixtures, mocking, parameterization, and coverage requirements.


Quality: 55% — Does it follow best practices?

Impact: No eval scenarios have been run

Security by Snyk: Passed — No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./docs/zh-CN/skills/python-testing/SKILL.md

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is strong in specificity and distinctiveness, clearly identifying pytest-based Python testing with concrete techniques like TDD, fixtures, mocking, and parameterization. However, it lacks an explicit 'Use when...' clause, which is critical for Claude to know when to select this skill, and could benefit from additional natural trigger term variations (e.g., 'unit tests', 'test cases').

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user needs to write Python tests, create unit tests, use pytest, or do test-driven development.' (written in Chinese to match the skill's description).

Include additional natural trigger terms users might say, such as 'unit tests', 'test cases', 'test suite', 'assert', or 'test-driven development' to improve keyword coverage.
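Putting both suggestions together, a revised frontmatter description could read roughly as follows. This is a sketch only: the `name`/`description` field names follow the common SKILL.md frontmatter convention, and the exact wording is illustrative (the real skill's description is in Chinese).

```yaml
---
name: python-testing
description: >
  Python testing strategies with pytest: TDD methodology, fixtures,
  mocking, parameterization, and coverage requirements. Use when the
  user needs to write unit tests or test cases, build a test suite,
  work with pytest, or do test-driven development in Python.
---
```

Note how the second sentence is the 'Use when...' trigger clause and folds in the suggested variations ('unit tests', 'test cases', 'test suite', 'test-driven development').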

Specificity — 3 / 3

Lists multiple specific concrete actions/concepts: TDD methodology, fixtures, mocking, parameterization, and coverage requirements. These are all concrete, identifiable testing techniques rather than vague abstractions.

Completeness — 2 / 3

Clearly answers 'what does this do' (Python testing strategy with pytest including TDD, fixtures, mocking, parameterization, coverage), but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this dimension at 2 per the rubric.

Trigger Term Quality — 2 / 3

Includes good keywords like 'pytest', 'Python', 'TDD', 'fixtures', 'mocking', 'parameterization', and 'coverage', but is missing common user variations such as 'unit tests', 'test cases', 'test-driven development' (English), '.py tests', or 'assert'. It is also entirely in Chinese, which may limit matching for English-speaking users.

Distinctiveness / Conflict Risk — 3 / 3

The combination of 'pytest', 'Python testing', 'TDD', and specific testing concepts like fixtures and parameterization creates a clear niche that is unlikely to conflict with other skills. It is distinctly about Python test strategy with pytest.

Total: 10 / 12

Passed

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a comprehensive pytest reference manual but fails as a SKILL.md by being excessively verbose and monolithic. It explains many concepts Claude already knows (basic assertions, what TDD means, how fixtures work) rather than focusing on project-specific conventions or novel patterns. The actionability is strong with executable code throughout, but the sheer volume undermines its utility as context-window content.

Suggestions

Reduce content by 60-70%: remove basic pytest knowledge Claude already has (assertions syntax, basic fixture usage, standard test structure) and focus only on project-specific conventions, preferred patterns, and non-obvious decisions.

Split into multiple files: keep SKILL.md as a concise overview (~50-80 lines) with references to separate files like FIXTURES.md, MOCKING.md, ASYNC_TESTING.md, and CONFIG.md.

Remove the entire 'pytest 基础' (pytest basics) and '断言' (assertions) sections—Claude knows how assert works and how to write basic tests. Replace them with a brief note like 'Use standard pytest assertions and fixtures.'

Add validation checkpoints to the TDD workflow: e.g., 'Run pytest --cov after each green step to verify coverage target is maintained' and 'If coverage drops below 80%, identify untested paths before proceeding.'
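The coverage checkpoint in the last suggestion could also be enforced mechanically rather than by instruction. With the pytest-cov plugin installed, a config fragment like the following makes `pytest` itself fail when coverage drops below the target (the 80% threshold and the `src` package name are illustrative, not taken from the skill):

```ini
# pytest.ini — assumes the pytest-cov plugin is installed
[pytest]
addopts = --cov=src --cov-fail-under=80
```

With this in place, the 'green' step of the TDD cycle doubles as the coverage verification step.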

Conciseness — 1 / 3

Extremely verbose at ~600+ lines. Explains basic pytest concepts Claude already knows (assertions, basic test structure, what TDD is). The basic assertions section alone lists every comparison operator. Much of this is standard pytest documentation that adds no novel value.

Actionability — 3 / 3

All code examples are executable and copy-paste ready. Concrete commands for running tests, configuration files, and complete fixture/mock patterns are provided with real Python code rather than pseudocode.

Workflow Clarity — 2 / 3

The TDD cycle (red-green-refactor) is clearly sequenced, and test organization structure is well-defined. However, there are no validation checkpoints for the overall testing workflow—no guidance on what to do when coverage drops, no feedback loops for fixing failing tests in CI, and no verification steps for the test infrastructure setup.

Progressive Disclosure — 1 / 3

Monolithic wall of text with no references to external files. Everything from basic assertions to async testing to configuration is inlined in a single massive document. Content like the full assertions reference, async patterns, and configuration examples could easily be split into separate files.

Total: 7 / 12

Passed
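For context on what 'copy-paste-ready fixture/mock patterns' means here, a minimal example in the style the review credits might look like the following. All names (`PricingService`, `apply_discount`) are hypothetical stand-ins, not code from the reviewed skill.

```python
import pytest
from unittest.mock import MagicMock


class PricingService:
    """Stand-in for a real external dependency (hypothetical)."""

    def current_discount(self, user_type: str) -> float:
        raise RuntimeError("real service must not be hit in tests")


def apply_discount(price: float, service: PricingService, user_type: str) -> float:
    # Unit under test: delegates the discount lookup to the service.
    return price * (1 - service.current_discount(user_type))


@pytest.fixture
def pricing_service():
    # Mock replaces the real dependency; spec= keeps the interface honest.
    svc = MagicMock(spec=PricingService)
    svc.current_discount.return_value = 0.1
    return svc


@pytest.mark.parametrize("price, expected", [
    (100.0, 90.0),
    (50.0, 45.0),
])
def test_apply_discount(pricing_service, price, expected):
    assert apply_discount(price, pricing_service, "member") == expected
```

A trimmed SKILL.md would show one such pattern per technique and link out to longer references, rather than inlining a full tutorial.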

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

skill_md_line_count — Warning

SKILL.md is long (817 lines); consider splitting into references/ and linking.

frontmatter_unknown_keys — Warning

Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 9 / 11

Passed

Repository: affaan-m/everything-claude-code (Reviewed)

