Pytest-first Python testing with emphasis on fakes over mocks. Covers unit, integration, and async tests; fixture design; coverage setup; and debugging test failures. Use when writing tests, reviewing test quality, designing fixtures, setting up pytest, or debugging failures—e.g., "write unit tests for new feature", "fixture design patterns", "fakes vs mocks comparison", "fix failing tests".
- Overall score: 94
- Does it follow best practices? 93%
- Impact: 1.10x
- Average score across 3 eval scenarios: 93%
- Status: Passed, no known issues
Fakes over mocks for business logic

| Criterion | Baseline | With skill |
| --- | --- | --- |
| Fakes over mocks for DB | 100% | 100% |
| Fakes over mocks for notifications | 100% | 100% |
| Fake is in-memory only | 100% | 100% |
| Fake tracks mutations | 100% | 100% |
| Assertions on fake state | 100% | 100% |
| Same interface as real | 75% | 87% |
| Layer 4 test location | 100% | 100% |
| Layer 1 fake tests | 0% | 0% |
| Descriptive test names | 62% | 100% |
| Behavior not implementation | 100% | 100% |
| No mock patching in Layer 4 | 100% | 100% |
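The fake-based criteria above can be illustrated with a minimal sketch. All names here (`User`, `FakeUserRepository`, `register_user`) are hypothetical, not taken from the evaluated skill: the fake is in-memory only, exposes the same interface a real repository would, tracks mutations, and lets the test assert on resulting state and behavior rather than on mock call sequences.

```python
from __future__ import annotations
from dataclasses import dataclass


# Hypothetical domain object, for illustration only.
@dataclass
class User:
    id: int
    email: str


class FakeUserRepository:
    """In-memory stand-in for a real DB-backed repository.

    Same interface as the real repository; tracks mutations so
    tests can assert on the resulting state.
    """

    def __init__(self) -> None:
        self._users: dict[int, User] = {}
        self.saved: list[User] = []  # mutation log for assertions

    def save(self, user: User) -> None:
        self._users[user.id] = user
        self.saved.append(user)

    def get(self, user_id: int) -> User | None:
        return self._users.get(user_id)


def register_user(repo: FakeUserRepository, user_id: int, email: str) -> User:
    # Business logic under test; depends only on the repository interface,
    # so a real implementation could be swapped in unchanged.
    user = User(id=user_id, email=email.lower())
    repo.save(user)
    return user


def test_register_user_persists_normalized_email():
    repo = FakeUserRepository()
    register_user(repo, 1, "Ada@Example.COM")
    # Assert on fake state (behavior), not on how it was called.
    assert repo.get(1).email == "ada@example.com"
    assert len(repo.saved) == 1
```

Because the fake fulfils the same contract as the real repository, no `unittest.mock.patch` is needed and the test survives refactors of the implementation details.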
Test directory structure and fixture patterns

| Criterion | Baseline | With skill |
| --- | --- | --- |
| tests/unit/fakes/ directory | 0% | 100% |
| tests/unit/services/ directory | 0% | 100% |
| tests/integration/ directory | 100% | 100% |
| tests/conftest.py present | 100% | 100% |
| Factory fixture pattern | 100% | 100% |
| Yield-based teardown | 0% | 0% |
| Explicit fixture scope | 100% | 100% |
| Explicit over autouse | 100% | 100% |
| Two-way binding capture | 100% | 100% |
| Required packages listed | 25% | 100% |
| Descriptive test naming | 100% | 100% |
| Fixture composition | 100% | 100% |
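A minimal sketch of the factory-fixture and yield-based-teardown patterns scored above. `Workspace` and `make_workspace` are hypothetical names invented for this example; the explicit `scope="function"` and the absence of `autouse` mirror the "explicit fixture scope" and "explicit over autouse" criteria.

```python
import pytest


# Hypothetical resource, for illustration only.
class Workspace:
    def __init__(self, name: str) -> None:
        self.name = name
        self.closed = False

    def close(self) -> None:
        self.closed = True


@pytest.fixture(scope="function")  # explicit scope, not autouse
def make_workspace():
    created = []

    def _make(name: str = "default") -> Workspace:
        ws = Workspace(name)
        created.append(ws)
        return ws

    yield _make  # tests call the factory as many times as they need
    # Teardown runs after the yield, even if the test fails.
    for ws in created:
        ws.close()


def test_each_workspace_is_independent(make_workspace):
    a = make_workspace("alpha")
    b = make_workspace("beta")
    assert a is not b
    assert a.name != b.name
```

The factory pattern lets one fixture produce several configured instances per test, while the single `yield` guarantees every created instance is cleaned up in one place.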
CLI testing and file operation anti-patterns

| Criterion | Baseline | With skill |
| --- | --- | --- |
| CliRunner not subprocess | 100% | 100% |
| No subprocess import | 100% | 100% |
| tmp_path for file operations | 100% | 100% |
| No hardcoded paths | 100% | 100% |
| No time.sleep | 100% | 100% |
| Public API only | 100% | 100% |
| Behavior-based assertions | 100% | 100% |
| CliRunner exit code checked | 100% | 100% |
| No speculative tests | 100% | 100% |
| Descriptive test naming | 80% | 100% |
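A sketch of the CLI criteria above, assuming Click is the CLI framework (the table names `CliRunner`, which is Click's in-process test runner). The `touch` command is a hypothetical example: the test drives it through `CliRunner` rather than `subprocess`, writes only under pytest's `tmp_path` fixture instead of a hardcoded path, and checks the exit code.

```python
import click
from click.testing import CliRunner


# Hypothetical CLI command, for illustration only.
@click.command()
@click.argument("path", type=click.Path())
def touch(path):
    """Create an empty file at PATH."""
    open(path, "w").close()
    click.echo(f"created {path}")


def test_touch_creates_file(tmp_path):
    runner = CliRunner()
    target = tmp_path / "out.txt"  # tmp_path fixture: no hardcoded paths
    result = runner.invoke(touch, [str(target)])
    assert result.exit_code == 0   # always check the exit code
    assert target.exists()         # behavior-based assertion
```

Running in-process via `CliRunner` keeps the test fast, avoids a `subprocess` import entirely, and still exercises the command through its public entry point.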
Commit: b74de5e