# Pytest plugin to create CodSpeed benchmarks
```bash
npx @tessl/cli install tessl/pypi-pytest-codspeed@4.0.0
```

A pytest plugin that enables performance benchmarking integrated with the CodSpeed continuous benchmarking platform. It provides both marker-based and fixture-based approaches for measuring performance, supporting both wall-clock time and instruction-based measurement.

```bash
pip install pytest-codspeed
```

```python
import pytest
from pytest_codspeed import BenchmarkFixture
```

The plugin automatically registers with pytest when installed, making the benchmark fixture and markers available.
```python
import pytest

# Approach 1: Measure entire test function with marker
@pytest.mark.benchmark
def test_sum_performance():
    data = list(range(1000))
    result = sum(x * x for x in data)
    assert result == 332833500

# Approach 2: Measure specific code with fixture
def test_calculation_performance(benchmark):
    data = list(range(1000))

    def calculate():
        return sum(x * x for x in data)

    result = benchmark(calculate)
    assert result == 332833500
```

pytest-codspeed integrates deeply with pytest through a comprehensive plugin system:

- `benchmark` and `codspeed_benchmark` fixtures with advanced measurement options (see the sketch below)
- `--codspeed` flag and configuration options

The plugin automatically disables other benchmark plugins (pytest-benchmark, pytest-speed) when active to prevent conflicts.
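For instance, the `codspeed_benchmark` fixture can be used in place of `benchmark` when that name is ambiguous, e.g. in suites that also know pytest-benchmark. A minimal sketch, assuming it forwards extra positional arguments to the target just like the `benchmark` fixture signature shown below; the sorting workload is purely illustrative:

```python
def test_sort_with_alias(codspeed_benchmark):
    # Positional arguments after the target are forwarded to it.
    result = codspeed_benchmark(sorted, [5, 3, 1, 4, 2])
    assert result == [1, 2, 3, 4, 5]
```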
Core benchmarking functionality using pytest markers and fixtures. Provides simple decorator-based and fixture-based approaches to measure function performance.
```python
# Pytest markers
@pytest.mark.benchmark
@pytest.mark.codspeed_benchmark

# Fixture function
def benchmark(target: Callable, *args, **kwargs) -> Any: ...
```

Command-line options and marker parameters for controlling benchmark behavior, including time limits, warmup periods, and measurement modes.
```python
# CLI options: --codspeed, --codspeed-mode, --codspeed-warmup-time, etc.
# Marker options: group, min_time, max_time, max_rounds
@pytest.mark.benchmark(group="math", max_time=5.0, max_rounds=100)
def test_with_options(): ...
```

Precise measurement control with setup/teardown functions, multiple rounds, and custom iterations for complex benchmarking scenarios.
```python
# Signature of the pedantic method on the benchmark fixture
def pedantic(
    target: Callable,
    args: tuple = (),
    kwargs: dict = {},
    setup: Callable | None = None,
    teardown: Callable | None = None,
    rounds: int = 1,
    warmup_rounds: int = 0,
    iterations: int = 1,
) -> Any: ...
```
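For example, `setup` can rebuild the input before measurement so that mutation by one round cannot skew the next. A minimal sketch, assuming `setup` runs before each measured round (mirroring the pedantic API popularized by pytest-benchmark); the shuffle workload and round counts are purely illustrative:

```python
import random

def test_shuffle_pedantic(benchmark):
    data: list[int] = []

    def setup():
        # Rebuild the input before each round so in-place shuffling
        # in one round cannot affect the next.
        data.clear()
        data.extend(range(1000))

    def shuffle_in_place():
        random.shuffle(data)

    # Round and warmup counts are illustrative, not recommendations.
    benchmark.pedantic(shuffle_in_place, setup=setup, rounds=10, warmup_rounds=2)
```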
pytest-codspeed automatically detects CodSpeed CI environments via the `CODSPEED_ENV` environment variable. When running locally, use the `--codspeed` flag to enable benchmarking:

```bash
pytest tests/ --codspeed
```

```python
class BenchmarkFixture:
    """Main benchmark fixture class providing measurement functionality."""

    def __call__(self, target: Callable, *args, **kwargs) -> Any:
        """Execute and measure target function."""
        ...

    def pedantic(
        self,
        target: Callable,
        args: tuple = (),
        kwargs: dict = {},
        setup: Callable | None = None,
        teardown: Callable | None = None,
        rounds: int = 1,
        warmup_rounds: int = 0,
        iterations: int = 1,
    ) -> Any:
        """Advanced benchmarking with setup/teardown control."""
        ...
```
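Annotating the fixture parameter with the importable `BenchmarkFixture` class enables editor and type-checker support. A minimal sketch using the `__call__` signature above:

```python
from pytest_codspeed import BenchmarkFixture

def test_sum_typed(benchmark: BenchmarkFixture):
    # benchmark(...) invokes BenchmarkFixture.__call__, which runs,
    # measures, and returns the result of the target.
    result = benchmark(sum, range(100))
    assert result == 4950
```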