Pytest plugin to create CodSpeed benchmarks
---
Core benchmarking functionality using pytest markers and fixtures. pytest-codspeed provides two primary approaches for measuring performance: decorating entire test functions or using the benchmark fixture for targeted measurement.
Decorators that automatically measure the execution time of entire test functions. When pytest-codspeed is enabled, only functions marked with these decorators will be executed.
```python
@pytest.mark.benchmark
def test_function():
    """Mark entire test function for benchmarking."""
    ...

@pytest.mark.codspeed_benchmark
def test_function():
    """CodSpeed-specific benchmark marker."""
    ...
```

```python
import pytest

@pytest.mark.benchmark
def test_list_comprehension():
    # The entire function execution time is measured
    data = [x * x for x in range(1000)]
    assert len(data) == 1000

@pytest.mark.benchmark
def test_generator_expression():
    data = list(x * x for x in range(1000))
    assert len(data) == 1000
```

The benchmark fixture provides precise control over what code gets measured by wrapping specific function calls. It can only be used once per test function.
```python
def benchmark(target: Callable[..., T], *args, **kwargs) -> T:
    """
    Execute and measure the performance of the target function.

    Parameters:
    - target: Function to benchmark
    - *args: Positional arguments to pass to target
    - **kwargs: Keyword arguments to pass to target

    Returns:
    The return value of the target function

    Raises:
    RuntimeError: If the benchmark fixture is used more than once per test
    """
```

```python
def test_sorting_performance(benchmark):
    import random
    data = [random.randint(1, 1000) for _ in range(1000)]
    # Only the sort operation is measured
    result = benchmark(sorted, data)
    assert len(result) == 1000

def test_with_arguments(benchmark):
    def calculate_sum(numbers, multiplier=1):
        return sum(x * multiplier for x in numbers)
    data = list(range(100))
    result = benchmark(calculate_sum, data, multiplier=2)
    assert result == 9900
```

The codspeed_benchmark fixture is an alternative name that provides identical functionality to the benchmark fixture.
```python
def codspeed_benchmark(target: Callable[..., T], *args, **kwargs) -> T:
    """
    CodSpeed-specific benchmark fixture with identical functionality to benchmark.

    Parameters:
    - target: Function to benchmark
    - *args: Positional arguments to pass to target
    - **kwargs: Keyword arguments to pass to target

    Returns:
    The return value of the target function
    """
```

When pytest-codspeed is enabled, it automatically:
- overrides the benchmark fixture from pytest-benchmark if installed
- keeps the original available as __benchmark for potential access
```python
# This will raise RuntimeError
def test_multiple_benchmark_calls(benchmark):
    result1 = benchmark(sum, [1, 2, 3])  # First call - OK
    result2 = benchmark(max, [1, 2, 3])  # Second call - raises RuntimeError
```

The benchmark fixture enforces single use per test to ensure measurement accuracy and prevent confusion about which operation is being benchmarked.
Install with Tessl CLI

```shell
npx tessl i tessl/pypi-pytest-codspeed
```