
tessl/pypi-pytest-codspeed

Pytest plugin to create CodSpeed benchmarks

Basic Benchmarking

Core benchmarking functionality using pytest markers and fixtures. pytest-codspeed provides two primary approaches for measuring performance: decorating entire test functions or using the benchmark fixture for targeted measurement.

Capabilities

Benchmark Markers

Decorators that measure the execution time of an entire test function. When pytest-codspeed is enabled, only benchmarked tests (marker-decorated or using a benchmark fixture) are run; all other tests are deselected.

@pytest.mark.benchmark
def test_function():
    """Mark entire test function for benchmarking."""
    ...

@pytest.mark.codspeed_benchmark  
def test_function():
    """CodSpeed-specific benchmark marker."""
    ...

Usage Example

import pytest

@pytest.mark.benchmark
def test_list_comprehension():
    # The entire function execution time is measured
    data = [x * x for x in range(1000)]
    assert len(data) == 1000

@pytest.mark.benchmark
def test_generator_expression():
    data = list(x * x for x in range(1000))
    assert len(data) == 1000

Benchmark Fixture

Provides precise control over what code gets measured by wrapping specific function calls. The fixture can only be used once per test function.

def benchmark(target: Callable[..., T], *args, **kwargs) -> T:
    """
    Execute the target function and measure its performance.
    
    Parameters:
    - target: Function to benchmark
    - *args: Positional arguments passed to target
    - **kwargs: Keyword arguments passed to target
    
    Returns:
    The return value of the target function
    
    Raises:
    RuntimeError: If the benchmark fixture is called more than once in a test
    """

Usage Example

def test_sorting_performance(benchmark):
    import random
    data = [random.randint(1, 1000) for _ in range(1000)]
    
    # Only the sort operation is measured
    result = benchmark(sorted, data)
    assert len(result) == 1000

def test_with_arguments(benchmark):
    def calculate_sum(numbers, multiplier=1):
        return sum(x * multiplier for x in numbers)
    
    data = list(range(100))
    result = benchmark(calculate_sum, data, multiplier=2)
    assert result == 9900

CodSpeed Benchmark Fixture

Alternative fixture name that provides identical functionality to the benchmark fixture.

def codspeed_benchmark(target: Callable[..., T], *args, **kwargs) -> T:
    """
    CodSpeed-specific benchmark fixture, identical in behavior to benchmark.
    
    Parameters:
    - target: Function to benchmark
    - *args: Positional arguments passed to target
    - **kwargs: Keyword arguments passed to target
    
    Returns:
    The return value of the target function
    """

Fixture Compatibility

When pytest-codspeed is enabled, it automatically:

  • Replaces the benchmark fixture from pytest-benchmark if installed
  • Disables pytest-benchmark plugin to prevent conflicts
  • Disables pytest-speed plugin to prevent conflicts
  • Archives the original benchmark fixture as __benchmark for potential access

Measurement Behavior

When CodSpeed is Disabled

  • Marker-decorated functions execute normally without measurement
  • Benchmark fixtures execute the target function without measurement overhead

When CodSpeed is Enabled

  • Only benchmark-marked functions or tests using benchmark fixtures are executed
  • Other test functions are automatically deselected
  • Actual measurement occurs using the configured instrument (walltime or instrumentation mode)
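Enabling CodSpeed is done at the command line; a sketch assuming the standard --codspeed flag:

```shell
# Runs only the benchmarked tests, with measurement enabled
pytest tests/ --codspeed
```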

Error Handling

# This will raise RuntimeError
def test_multiple_benchmark_calls(benchmark):
    result1 = benchmark(sum, [1, 2, 3])  # First call - OK
    result2 = benchmark(max, [1, 2, 3])  # Second call - Raises RuntimeError

The benchmark fixture enforces single-use per test to ensure measurement accuracy and prevent confusion about which operation is being benchmarked.
