A pytest fixture for benchmarking code that automatically calibrates test runs for accurate performance measurements.
```
npx @tessl/cli install tessl/pypi-pytest-benchmark@5.1.0
```

pytest-benchmark is a pytest plugin that provides a comprehensive `benchmark` fixture for benchmarking Python code. It automatically calibrates test runs to produce accurate performance measurements, integrates seamlessly with pytest's testing framework, and offers statistical analysis of results with multiple output formats.
```
pip install pytest-benchmark
```

```python
# Primary usage - the benchmark fixture is automatically available in pytest tests
import pytest

# For programmatic access to benchmarking classes (rarely needed)
from pytest_benchmark.fixture import BenchmarkFixture
from pytest_benchmark.session import BenchmarkSession
```

```python
def test_my_function(benchmark):
    # Benchmark a function with automatic calibration
    result = benchmark(my_function, arg1, arg2, kwarg=value)
    assert result == expected_value
```

```python
def my_function(x, y):
    """Function to be benchmarked."""
    return x * y + sum(range(100))


def test_benchmark_example(benchmark):
    result = benchmark(my_function, 5, 10)
    assert result == 5000  # 5 * 10 + sum(range(100)) == 50 + 4950
```

```python
def test_pedantic_benchmark(benchmark):
    # Fine-grained control over benchmark execution
    result = benchmark.pedantic(
        target=my_function,
        args=(5, 10),
        rounds=10,
        iterations=1000
    )
    assert result == 5000
```

The main benchmarking functionality is exposed through the `benchmark` fixture, which provides automatic calibration, warmup, and statistical analysis.
Key APIs:
```python
def benchmark(func, *args, **kwargs) -> Any: ...
def benchmark.pedantic(target, args=(), kwargs=None, setup=None, rounds=1, warmup_rounds=0, iterations=1) -> Any: ...
```
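For rounds that need fresh arguments, `pedantic` accepts a `setup` callable. A minimal sketch reusing `my_function` from above; `make_args` is a hypothetical helper, and `iterations` stays at 1 because pytest-benchmark does not allow more than one iteration when `setup` is used:

```python
def make_args():
    # Runs before every round; the returned (args, kwargs) tuple feeds the target
    return (5, 10), {}


def test_pedantic_with_setup(benchmark):
    result = benchmark.pedantic(
        my_function,
        setup=make_args,
        rounds=20,
        warmup_rounds=5,
        iterations=1,  # must remain 1 when a setup function is supplied
    )
    assert result == 5000
```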
Benchmark existing functions without modifying test code, using aspect-oriented programming via the `weave` method.

Key APIs:

```python
def benchmark.weave(target, **kwargs) -> None: ...
benchmark_weave = benchmark.weave  # shortcut fixture
```
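In practice this might look as follows; a sketch assuming aspectlib is installed (weave patches the target through it) and that `lazy=True` is forwarded to aspectlib to defer patching until the first call:

```python
import pathlib


def test_weave_path_exists(benchmark):
    # Patch pathlib.Path.exists in place so every call made during
    # this test is routed through the benchmark's timer.
    benchmark.weave(pathlib.Path.exists, lazy=True)
    assert pathlib.Path(__file__).exists()
```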
## Configuration and Customization

Extensive configuration options are available via pytest command-line arguments and test markers for controlling benchmark behavior.

Key APIs:

```python
@pytest.mark.benchmark(max_time=2.0, min_rounds=10, group="mygroup")
def test_example(benchmark): ...
```
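Several documented marker options can be combined on one test; the values below are illustrative, and `time.process_time` is just one possible timer choice:

```python
import time

import pytest


@pytest.mark.benchmark(
    group="parsing",          # group related benchmarks in the report
    min_rounds=15,            # never run fewer than 15 rounds
    max_time=1.0,             # time budget (in seconds) for calibration
    timer=time.process_time,  # measure CPU time instead of wall time
    disable_gc=True,          # keep the garbage collector out of the timings
    warmup=False,             # skip the warmup phase
)
def test_split(benchmark):
    benchmark(str.split, "a b c " * 100)
```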
## Results Storage and Comparison

Store benchmark results in various backends (file, Elasticsearch) and compare performance across runs.

Key APIs:

```
pytest --benchmark-save=baseline
pytest --benchmark-compare
pytest-benchmark compare
```
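Saved runs can be annotated from inside a test through the fixture's `extra_info` mapping, which is stored alongside the timings in the saved JSON; the keys below are arbitrary examples:

```python
import platform


def test_annotated(benchmark):
    # extra_info entries are written into the saved results for later comparison
    benchmark.extra_info["python"] = platform.python_version()
    benchmark.extra_info["dataset"] = "small"
    benchmark(sum, range(1000))
```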
## Statistical Analysis and Reporting

Comprehensive statistical analysis with multiple output formats, including tables, CSV, histograms, and cProfile integration.

Key APIs:

```
pytest --benchmark-histogram --benchmark-csv=results.csv
pytest --benchmark-cprofile=cumtime
```
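The computed statistics can also be inspected from test code after the timed run; a sketch assuming the fixture keeps them on `benchmark.stats.stats` (an internal object with `mean`, `min`, `max`, `stddev`, and similar fields, so the exact attribute path may vary between versions):

```python
def test_inspect_stats(benchmark):
    benchmark(sum, range(1000))
    # benchmark.stats is the run's metadata; its .stats attribute holds
    # the computed statistics (an internal layout, not a stable API).
    assert benchmark.stats.stats.max >= benchmark.stats.stats.min
```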
Standalone CLI tools for managing and analyzing saved benchmark results outside of pytest.
Key APIs:

```
pytest-benchmark list
pytest-benchmark compare [options] [glob_or_file...]
pytest-benchmark help [command]
```
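These commands can be scripted like any other console tool; a minimal sketch, assuming two runs were saved earlier (the `0001`/`0002` names are hypothetical, standing in for the counter prefixes that `--benchmark-save` produces):

```python
import subprocess

# List saved runs, then compare two of them by their counter prefixes.
subprocess.run(["pytest-benchmark", "list"], check=True)
subprocess.run(["pytest-benchmark", "compare", "0001", "0002"], check=True)
```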