python-testing

Pytest-first Python testing with emphasis on fakes over mocks. Covers unit, integration, and async tests; fixture design; coverage setup; and debugging test failures. Use when writing tests, reviewing test quality, designing fixtures, setting up pytest, or debugging failures—e.g., "write unit tests for new feature", "fixture design patterns", "fakes vs mocks comparison", "fix failing tests".

Score: 94 · Impact: 1.10x

Quality: 93% (Does it follow best practices?)
Impact: 93% (1.10x)
Average score across 3 eval scenarios.

Security by Snyk: Passed (no known issues).


Evaluation results

Subscription Service Tests: fakes over mocks for business logic
Score with context: 91% (improvement: +4%)

| Criteria | Without context | With context |
| --- | --- | --- |
| Fakes over mocks for DB | 100% | 100% |
| Fakes over mocks for notifications | 100% | 100% |
| Fake is in-memory only | 100% | 100% |
| Fake tracks mutations | 100% | 100% |
| Assertions on fake state | 100% | 100% |
| Same interface as real | 75% | 87% |
| Layer 4 test location | 100% | 100% |
| Layer 1 fake tests | 0% | 0% |
| Descriptive test names | 62% | 100% |
| Behavior not implementation | 100% | 100% |
| No mock patching in Layer 4 | 100% | 100% |
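The "fake tracks mutations", "same interface as real", and "assertions on fake state" criteria all describe one pattern: an in-memory fake that mirrors the real dependency's interface so business logic can be tested without mocks. A minimal sketch, assuming hypothetical `Subscription`/`FakeSubscriptionRepository` names (these are illustrative, not taken from the evaluated skill):

```python
from dataclasses import dataclass


@dataclass
class Subscription:
    user_id: str
    plan: str
    active: bool = True


class FakeSubscriptionRepository:
    """In-memory stand-in exposing the same interface as the real repository."""

    def __init__(self):
        self._subs = {}   # in-memory only: no DB, no files
        self.saved = []   # tracks mutations so tests can assert on them

    def save(self, sub: Subscription) -> None:
        self._subs[sub.user_id] = sub
        self.saved.append(sub)

    def get(self, user_id: str):
        return self._subs.get(user_id)


def cancel_subscription(repo, user_id: str) -> bool:
    """Business logic under test; depends only on the repository interface."""
    sub = repo.get(user_id)
    if sub is None or not sub.active:
        return False
    repo.save(Subscription(sub.user_id, sub.plan, active=False))
    return True


def test_cancel_deactivates_active_subscription():
    repo = FakeSubscriptionRepository()
    repo.save(Subscription("u1", "pro"))

    assert cancel_subscription(repo, "u1") is True
    # Assert on fake state, not on call counts or patched internals:
    assert repo.get("u1").active is False
```

Because the fake honors the real interface, the same test passes unchanged against the production repository, which is exactly what the "same interface as real" criterion rewards.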

Inventory Tracker Test Suite Setup: test directory structure and fixture patterns
Score with context: 90% (improvement: +22%)

| Criteria | Without context | With context |
| --- | --- | --- |
| tests/unit/fakes/ directory | 0% | 100% |
| tests/unit/services/ directory | 0% | 100% |
| tests/integration/ directory | 100% | 100% |
| tests/conftest.py present | 100% | 100% |
| Factory fixture pattern | 100% | 100% |
| Yield-based teardown | 0% | 0% |
| Explicit fixture scope | 100% | 100% |
| Explicit over autouse | 100% | 100% |
| Two-way binding capture | 100% | 100% |
| Required packages listed | 25% | 100% |
| Descriptive test naming | 100% | 100% |
| Fixture composition | 100% | 100% |

Report Exporter CLI Tests: CLI testing and file operation anti-patterns
Score with context: 100% (improvement: +1%)

| Criteria | Without context | With context |
| --- | --- | --- |
| CliRunner not subprocess | 100% | 100% |
| No subprocess import | 100% | 100% |
| tmp_path for file operations | 100% | 100% |
| No hardcoded paths | 100% | 100% |
| No time.sleep | 100% | 100% |
| Public API only | 100% | 100% |
| Behavior-based assertions | 100% | 100% |
| CliRunner exit code checked | 100% | 100% |
| No speculative tests | 100% | 100% |
| Descriptive test naming | 80% | 100% |

Repository: jjjermiah/dotagents
Evaluated with agent: Claude Code
Model: Claude Sonnet 4.6
