tessl/pypi-pytest-benchmark

A pytest fixture for benchmarking code that automatically calibrates test runs for accurate performance measurements.

Workspace: tessl
Visibility: Public
Describes: pkg:pypi/pytest-benchmark@5.1.x

To install, run

npx @tessl/cli install tessl/pypi-pytest-benchmark@5.1.0

pytest-benchmark

Overview

pytest-benchmark is a pytest plugin that provides a comprehensive fixture for benchmarking Python code. It automatically calibrates test runs to provide accurate performance measurements, integrates seamlessly with pytest's testing framework, and offers statistical analysis of results with multiple output formats.

Package Information

  • Name: pytest-benchmark
  • Type: pytest plugin / Python library
  • Language: Python
  • Installation: pip install pytest-benchmark

Core Imports

# Primary usage - the benchmark fixture is automatically available in pytest tests
import pytest

# For programmatic access to benchmarking classes (rarely needed)
from pytest_benchmark.fixture import BenchmarkFixture
from pytest_benchmark.session import BenchmarkSession

Basic Usage

Simple Function Benchmarking

# General pattern (arg1, arg2, kwarg, and expected_value are placeholders)
def test_my_function(benchmark):
    # Benchmark a function with automatic calibration
    result = benchmark(my_function, arg1, arg2, kwarg=value)
    assert result == expected_value

# Concrete example
def my_function(x, y):
    """Function to be benchmarked."""
    return x * y + sum(range(100))

def test_benchmark_example(benchmark):
    result = benchmark(my_function, 5, 10)
    assert result == 5000  # 5 * 10 + sum(range(100)) == 50 + 4950

Pedantic Mode for Precise Control

def test_pedantic_benchmark(benchmark):
    # Fine-grained control over benchmark execution
    result = benchmark.pedantic(
        target=my_function,
        args=(5, 10),
        rounds=10,
        iterations=1000
    )
    assert result == 5000
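
Pedantic mode also accepts a setup callable that runs once per round outside the timed region; per the plugin's docs, if it returns a (args, kwargs) pair those are passed to the target. A minimal sketch (make_args is an illustrative helper, not part of the library):

def make_args():
    # Runs before each round; work done here is excluded from the timing
    data = list(range(1000))
    return (data,), {}

def test_pedantic_with_setup(benchmark):
    # setup supplies the arguments, so none are passed explicitly here
    result = benchmark.pedantic(sorted, setup=make_args, rounds=5)
    assert result == list(range(1000))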

Capabilities

Core Benchmarking

The main benchmarking functionality is exposed through the benchmark fixture, which provides automatic calibration, warmup, and statistical analysis.

Key APIs:

benchmark(func, *args, **kwargs) -> Any
benchmark.pedantic(target, args=(), kwargs=None, setup=None, rounds=1, warmup_rounds=0, iterations=1) -> Any
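
Beyond the call itself, the fixture exposes an extra_info dict for attaching metadata to saved results; a brief sketch:

def test_with_extra_info(benchmark):
    # extra_info entries are written into the saved JSON alongside the stats
    benchmark.extra_info['input_size'] = 100
    result = benchmark(sum, range(100))
    assert result == 4950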

See docs/core-benchmarking.md for details.

Aspect-Oriented Benchmarking

Benchmark existing functions without explicitly wrapping them in test code, using aspect-oriented patching via the weave method.

Key APIs:

benchmark.weave(target, **kwargs) -> None
benchmark_weave(target, **kwargs)  # shortcut fixture with the same behavior
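
A minimal sketch, assuming a hypothetical Slow class (weaving relies on the optional aspectlib dependency); benchmark.weave patches the target so later calls in the test are the ones timed:

class Slow:
    def work(self):
        return sum(range(1000))

def test_weave_example(benchmark):
    # Patch Slow.work in place; the ordinary call below is what gets benchmarked
    benchmark.weave(Slow.work, lazy=True)
    assert Slow().work() == 499500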

See docs/aspect-benchmarking.md for details.

Configuration and Customization

Extensive configuration options via pytest command-line arguments and test markers for controlling benchmark behavior.

Key APIs:

@pytest.mark.benchmark(max_time=2.0, min_rounds=10, group="mygroup")
def test_example(benchmark): ...
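
The same behavior can be controlled globally from the command line, e.g.:

pytest --benchmark-max-time=2.0 --benchmark-min-rounds=10 --benchmark-group-by=group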

See docs/configuration.md for details.

Results Storage and Comparison

Store benchmark results in various backends (file, Elasticsearch) and compare performance across runs.

Key APIs:

pytest --benchmark-save=baseline
pytest --benchmark-compare
pytest-benchmark compare
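
A typical regression-guard workflow, sketched here (the 0001 run id stands for whatever number the save step produced):

# 1. Record a baseline run
pytest --benchmark-save=baseline

# 2. Later, compare against it and fail if the mean regresses more than 10%
pytest --benchmark-compare=0001 --benchmark-compare-fail=mean:10%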

See docs/storage-comparison.md for details.

Statistical Analysis and Reporting

Comprehensive statistical analysis with multiple output formats including tables, CSV, histograms, and cProfile integration.

Key APIs:

pytest --benchmark-histogram --benchmark-csv=results.csv
pytest --benchmark-cprofile=cumtime
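
The results table itself can be trimmed and re-ordered, for example:

pytest --benchmark-columns=min,mean,stddev,rounds --benchmark-sort=mean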

See docs/analysis-reporting.md for details.

Command Line Interface

Standalone CLI tools for managing and analyzing saved benchmark results outside of pytest.

Key APIs:

pytest-benchmark list
pytest-benchmark compare [options] [glob_or_file...]
pytest-benchmark help [command]
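
For example, comparing two saved runs by id (a sketch; see pytest-benchmark compare --help for the exact options):

pytest-benchmark compare 0001 0002 --sort=mean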

See docs/cli-tools.md for details.