Write and evaluate effective Python tests using pytest. Use when writing tests, reviewing test code, debugging test failures, or improving test coverage. Covers test design, fixtures, parameterization, mocking, and async testing.
Every test should be atomic and self-contained, verifying a single behavior; a test that covers several things at once is harder to debug and maintain. The test name should tell you what's broken when it fails. Multiple assertions are fine as long as they all verify the same behavior.
```python
# Good: Name tells you what's broken
def test_user_creation_sets_defaults():
    user = User(name="Alice")
    assert user.role == "member"
    assert user.id is not None
    assert user.created_at is not None


# Bad: If this fails, what behavior is broken?
def test_user():
    user = User(name="Alice")
    assert user.role == "member"
    user.promote()
    assert user.role == "admin"
    assert user.can_delete_others()
```

Use `@pytest.mark.parametrize` to cover related input/output cases with one test body:
@pytest.mark.parametrize("input,expected", [
("hello", "HELLO"),
("World", "WORLD"),
("", ""),
("123", "123"),
])
def test_uppercase_conversion(input, expected):
assert input.upper() == expectedDon't parameterize unrelated behaviors. If the test logic differs, write separate tests.
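For example, the promotion and permission rules from the earlier `User` example belong in separate tests, not one parameterized test (a sketch reusing that hypothetical `User` API):

```python
# Each behavior gets its own test, so a failure points at exactly one rule
def test_promote_grants_admin_role():
    user = User(name="Alice")
    user.promote()
    assert user.role == "admin"


def test_new_member_cannot_delete_others():
    user = User(name="Alice")
    assert not user.can_delete_others()
```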
This project uses `asyncio_mode = "auto"` globally. Write async tests without decorators:
```python
# Correct
async def test_async_operation():
    result = await some_async_function()
    assert result == expected


# Wrong - don't add this
@pytest.mark.asyncio
async def test_async_operation():
    ...
```

Put ALL imports at the top of the file:
```python
# Correct
import pytest
from fastmcp import FastMCP
from fastmcp.client import Client


async def test_something():
    mcp = FastMCP("test")
    ...


# Wrong - no local imports
async def test_something():
    from fastmcp import FastMCP  # Don't do this
    ...
```

Pass FastMCP servers directly to clients:
```python
from fastmcp import FastMCP
from fastmcp.client import Client

mcp = FastMCP("TestServer")


@mcp.tool
def greet(name: str) -> str:
    return f"Hello, {name}!"


async def test_greet_tool():
    async with Client(mcp) as client:
        result = await client.call_tool("greet", {"name": "World"})
        assert result.data == "Hello, World!"
```

CONSTRAINT (API version): `result.data` is the FastMCP v3 accessor. The legacy v2 pattern `result[0].text` is incorrect for v3 and will raise an `AttributeError`. Never use `result[0].text` in new tests.
Only use HTTP transport when explicitly testing network features.
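When a test genuinely needs the network path, point the client at a running server's URL instead of the in-memory instance. A minimal sketch, assuming a server is already listening and that the URL (a placeholder here) resolves to its HTTP endpoint:

```python
from fastmcp.client import Client


async def test_greet_over_http():
    # Placeholder URL; FastMCP clients infer an HTTP transport from URL strings
    async with Client("http://localhost:8000/mcp") as client:
        result = await client.call_tool("greet", {"name": "World"})
        assert result.data == "Hello, World!"
```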
Use inline-snapshot for testing JSON schemas and complex structures:

```python
from inline_snapshot import snapshot


def test_schema_generation():
    schema = generate_schema(MyModel)
    assert schema == snapshot()  # Will auto-populate on first run
```

Commands:

- `pytest --inline-snapshot=create` - populate empty snapshots
- `pytest --inline-snapshot=fix` - update after intentional changes
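After `--inline-snapshot=create`, the empty `snapshot()` call is rewritten in place with the captured value. The schema literal below is illustrative, not real output:

```python
def test_schema_generation():
    schema = generate_schema(MyModel)
    # inline-snapshot wrote this literal into the source; review it like any other diff
    assert schema == snapshot({"type": "object", "properties": {"name": {"type": "string"}}})
```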
Use fixtures for shared setup:

```python
@pytest.fixture
def client():
    return Client()


async def test_with_client(client):
    result = await client.ping()
    assert result is not None
```
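When a fixture needs teardown, `yield` the value and put cleanup after the `yield`; pytest runs it even if the test fails. A sketch assuming the same hypothetical `Client` with `connect`/`close` methods:

```python
@pytest.fixture
async def connected_client():
    client = Client()
    await client.connect()  # hypothetical setup call
    yield client
    await client.close()  # teardown runs even when the test fails
```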
Use `tmp_path` for file operations:

```python
def test_file_writing(tmp_path):
    file = tmp_path / "test.txt"
    file.write_text("content")
    assert file.read_text() == "content"
```
Mock external dependencies with `unittest.mock`:

```python
from unittest.mock import AsyncMock, patch


async def test_external_api_call():
    with patch("mymodule.external_client.fetch", new_callable=AsyncMock) as mock:
        mock.return_value = {"data": "test"}
        result = await my_function()
        assert result == {"data": "test"}
```

Test your code with real implementations when possible. Mock external services, not internal classes.
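A rule of thumb: patch the seam where your code talks to the outside world and let your own classes run for real. In this sketch, `UserService` is internal and `mymodule.smtp_client.send` stands in for the external boundary; both names are hypothetical:

```python
from unittest.mock import AsyncMock, patch


async def test_welcome_email_sent_on_signup():
    # Patch only the external boundary; UserService and its internals run for real
    with patch("mymodule.smtp_client.send", new_callable=AsyncMock) as send:
        service = UserService()
        await service.sign_up("alice@example.com")
        send.assert_awaited_once()
```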
Use descriptive names that explain the scenario:

```python
# Good
def test_login_fails_with_invalid_password(): ...
def test_user_can_update_own_profile(): ...
def test_admin_can_delete_any_user(): ...


# Bad
def test_login(): ...
def test_update(): ...
def test_delete(): ...
```

Assert on expected exceptions with `pytest.raises`:
```python
import pytest


def test_raises_on_invalid_input():
    with pytest.raises(ValueError, match="must be positive"):
        calculate(-1)


async def test_async_raises():
    with pytest.raises(ConnectionError):
        await connect_to_invalid_host()
```
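To assert on the exception object itself, capture it with `as exc_info` (reusing the hypothetical `calculate` from above):

```python
def test_error_message_names_the_limit():
    with pytest.raises(ValueError, match="must be positive") as exc_info:
        calculate(-1)
    # The raised exception is available for further assertions
    assert "positive" in str(exc_info.value)
```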
Run tests with:

```bash
uv run pytest -n auto               # Run all tests in parallel
uv run pytest -n auto -x            # Stop on first failure
uv run pytest path/to/test.py       # Run a specific file
uv run pytest -k "test_name"        # Run tests matching a pattern
uv run pytest -m "not integration"  # Exclude integration tests
```

Before submitting tests, verify you haven't added any `@pytest.mark.asyncio` decorators; with `asyncio_mode = "auto"` they are redundant.
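`-m "not integration"` assumes an `integration` marker is registered. If this project doesn't already declare it in its pytest config, one way is a `conftest.py` hook (a sketch; the marker description is illustrative):

```python
# conftest.py
def pytest_configure(config):
    # Register the marker so `-m "not integration"` runs without warnings
    config.addinivalue_line(
        "markers", "integration: tests that hit real external services"
    )
```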