tessl/pypi-pytest-codspeed

Pytest plugin to create CodSpeed benchmarks

Describes pkg:pypi/pytest-codspeed@4.0.x

To install, run

npx @tessl/cli install tessl/pypi-pytest-codspeed@4.0.0


pytest-codspeed

A pytest plugin that enables performance benchmarking integrated with the CodSpeed continuous benchmarking platform. It provides marker-based and fixture-based approaches to measuring performance, and supports both wall-clock and instruction-based measurement.

Package Information

  • Package Name: pytest-codspeed
  • Language: Python
  • Installation: pip install pytest-codspeed
  • Supported Python: 3.9+

Core Imports

import pytest
from pytest_codspeed import BenchmarkFixture

The plugin automatically registers with pytest when installed, making the benchmark fixture and markers available.

Basic Usage

import pytest

# Approach 1: Measure entire test function with marker
@pytest.mark.benchmark
def test_sum_performance():
    data = list(range(1000))
    result = sum(x * x for x in data)
    assert result == 332833500

# Approach 2: Measure specific code with fixture
def test_calculation_performance(benchmark):
    data = list(range(1000))
    
    def calculate():
        return sum(x * x for x in data)
    
    result = benchmark(calculate)
    assert result == 332833500

Architecture

pytest-codspeed integrates with pytest through several plugin mechanisms:

  • Plugin Hooks: Customizes pytest behavior to filter and execute benchmarks
  • Measurement Instruments: Pluggable backends for walltime vs instruction-based measurement
  • Fixture System: Provides benchmark and codspeed_benchmark fixtures with advanced measurement options
  • CLI Integration: Adds --codspeed flag and configuration options
  • Results Reporting: Rich terminal output and JSON export for CI/CD integration

The plugin automatically disables other benchmark plugins (pytest-benchmark, pytest-speed) when active to prevent conflicts.

Capabilities

Basic Benchmarking

Core benchmarking functionality using pytest markers and fixtures. Provides simple decorator-based and fixture-based approaches to measure function performance.

# Pytest markers
@pytest.mark.benchmark
@pytest.mark.codspeed_benchmark

# Fixture function
def benchmark(target: Callable, *args, **kwargs) -> Any: ...
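Positional and keyword arguments after the target are forwarded to it, so a function can be benchmarked without a wrapper. A minimal sketch based on the signature above:

def join_words(words, sep=" "):
    return sep.join(words)

def test_join_performance(benchmark):
    # Extra arguments are passed through to the target on each call
    result = benchmark(join_words, ["a", "b", "c"], sep="-")
    assert result == "a-b-c"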

See docs/basic-benchmarking.md.

Configuration Options

Command-line options and marker parameters for controlling benchmark behavior, including time limits, warmup periods, and measurement modes.

# CLI options: --codspeed, --codspeed-mode, --codspeed-warmup-time, etc.
# Marker options: group, min_time, max_time, max_rounds

@pytest.mark.benchmark(group="math", max_time=5.0, max_rounds=100)
def test_with_options(): ...
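As a sketch, CLI options like these might be combined on one invocation (flag values here are illustrative; --codspeed-warmup-time is assumed to take seconds, so confirm the exact names and accepted values with pytest --help):

pytest tests/ --codspeed --codspeed-warmup-time 1.0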

See docs/configuration.md.

Advanced Benchmarking

Precise measurement control with setup/teardown functions, multiple rounds, and custom iterations for complex benchmarking scenarios.

# Method on the benchmark fixture
def pedantic(
    target: Callable,
    args: tuple = (),
    kwargs: dict = {},
    setup: Callable | None = None,
    teardown: Callable | None = None,
    rounds: int = 1,
    warmup_rounds: int = 0,
    iterations: int = 1
) -> Any: ...
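A minimal sketch of pedantic in use, assuming setup is a zero-argument callable invoked around each round, as the signature above suggests:

def test_sorted_pedantic(benchmark):
    data = []

    def setup():
        # Rebuild the unsorted input before each round
        data[:] = range(1000, 0, -1)

    result = benchmark.pedantic(
        lambda: sorted(data),
        setup=setup,
        rounds=10,
        warmup_rounds=2,
    )
    assert result[0] == 1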

See docs/advanced-benchmarking.md.

Environment Integration

pytest-codspeed automatically detects CodSpeed CI environments via the CODSPEED_ENV environment variable. When running locally, use the --codspeed flag to enable benchmarking:

pytest tests/ --codspeed
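The same detection can be mirrored in test code, for example to skip costly setup outside CodSpeed CI. A hypothetical helper, not part of the plugin's API:

import os

def running_under_codspeed() -> bool:
    # The plugin treats the CODSPEED_ENV variable as the CodSpeed CI signal
    return "CODSPEED_ENV" in os.environ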

Types

from typing import Any, Callable

class BenchmarkFixture:
    """Main benchmark fixture class providing measurement functionality."""
    
    def __call__(self, target: Callable, *args, **kwargs) -> Any:
        """Execute and measure target function."""
        ...
    
    def pedantic(
        self,
        target: Callable,
        args: tuple = (),
        kwargs: dict = {},
        setup: Callable | None = None,
        teardown: Callable | None = None,
        rounds: int = 1,
        warmup_rounds: int = 0,
        iterations: int = 1
    ) -> Any:
        """Advanced benchmarking with setup/teardown control."""
        ...
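Since BenchmarkFixture is importable (see Core Imports), the fixture parameter can be annotated so type checkers see the pedantic API. A short sketch:

from pytest_codspeed import BenchmarkFixture

def test_typed_benchmark(benchmark: BenchmarkFixture) -> None:
    # The annotation is optional; pytest injects the fixture by name
    result = benchmark(sum, range(100))
    assert result == 4950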