Python module to run and analyze benchmarks with high precision and statistical rigor
—
PyPerf provides comprehensive command-line tools for running benchmarks, analyzing results, and managing benchmark data. The full-featured CLI is accessible via python -m pyperf, with specialized subcommands for different benchmarking scenarios.
# Command-line interface: python -m pyperf <command> [options]
# Core benchmarking commands:
# timeit - Python code benchmarking (similar to timeit module)
# command - External command benchmarking
# Analysis and display commands:
# show - Display benchmark results with formatting options
# compare_to - Compare multiple benchmark files
# stats - Show detailed statistics
# hist - Display histogram of measurement values
# metadata - Show benchmark metadata
# check - Validate benchmark stability and detect issues
# Data management commands:
# convert - Modify and transform benchmark files
# dump - Display raw benchmark data in various formats
# slowest - List benchmarks sorted by execution time
# System commands:
# system - System configuration recommendations for benchmarking
# collect_metadata - Gather comprehensive system metadata

Specialized runner for timeit-style Python code benchmarking with command-line integration.
class TimeitRunner(pyperf.Runner):
    """Command-line interface for Python code benchmarking."""

    def __init__(self):
        """Initialize timeit runner with CLI argument parsing."""

Command-line wrapper for external command benchmarking.
class CommandRunner(pyperf.Runner):
    """Command-line interface for external command benchmarking."""

    def __init__(self, cmd):
        """
        Initialize command runner.

        Args:
            cmd: Argument parser for command configuration
        """

Standard options available across most commands:
# Execution control options:
# --rigorous - More thorough benchmarking (more processes/values)
# --fast - Quick rough measurements (fewer processes/values)
# --debug-single-value - Debug mode with single measurement
# --quiet - Suppress warnings and verbose output
# --verbose - Enable detailed output and progress information
# Process and measurement options:
# --processes N - Number of worker processes
# --values N - Number of measurements per process
# --loops N - Number of loops per measurement
# --warmups N - Number of warmup iterations
# --min-time SECONDS - Minimum measurement duration
# System optimization options:
# --affinity CPUS - CPU affinity for worker processes (e.g., "0-3" or "0,2,4")
# --tracemalloc - Enable memory allocation tracking
# --track-memory - Track memory usage during benchmarks
# --profile FILE - Collect cProfile profiling data, written to FILE
# --inherit-environ - Worker processes inherit environment variables
# --copy-env - Copy environment to worker processes
# Process management options:
# --timeout SECONDS - Maximum benchmark execution time
# --worker - Run in worker process mode (internal)
# --worker-task - Worker task identifier (internal)
# --pipe FD - Pipe output to file descriptor
# Calibration options:
# --calibrate-loops / --recalibrate-loops - Force loop count calibration
# --calibrate-warmups / --recalibrate-warmups - Force warmup calibration
# Output and format options:
# --output FILE - Output file for results
# --append FILE - Append results to FILE instead of overwriting it
# --json - JSON output format
# --csv - CSV output format

# Basic Python statement timing
python -m pyperf timeit '[i*2 for i in range(1000)]'
# With setup code
python -m pyperf timeit -s 'import math' 'math.sqrt(2)'
# Multiple statements
python -m pyperf timeit -s 'x = list(range(100))' 'sorted(x)' 'x.sort()'
# Save results to file
python -m pyperf timeit --output results.json '[i for i in range(100)]'
# Rigorous benchmarking
python -m pyperf timeit --rigorous 'sum(range(1000))'
# Quick rough measurement
python -m pyperf timeit --fast --quiet 'len("hello world")'

# Benchmark external command
python -m pyperf command python -c 'print("Hello World")'
# With custom name
python -m pyperf command --name "python_hello" python -c 'print("Hello")'
# Save results
python -m pyperf command --output cmd_results.json -- ls -la
# Benchmark with affinity
python -m pyperf command --affinity 0-3 python -c 'import math; print(math.pi)'

# Show benchmark results
python -m pyperf show results.json
# Show with histogram
python -m pyperf hist results.json
# Detailed statistics
python -m pyperf stats results.json
# Show metadata
python -m pyperf metadata results.json
# Check benchmark stability
python -m pyperf check results.json
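The stability check relies on simple dispersion statistics; pyperf warns, for example, when the standard deviation exceeds roughly 10% of the mean. A minimal sketch of that heuristic (the `looks_unstable` helper and the exact threshold handling are illustrative, not pyperf's implementation):

```python
import statistics

def looks_unstable(values, threshold=0.10):
    """Flag a run as unstable when the relative standard deviation
    exceeds the threshold (10% of the mean, mirroring pyperf's warning)."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return stdev > threshold * mean

# Tight measurements: stable
print(looks_unstable([1.00, 1.01, 0.99, 1.02, 0.98]))  # False
# Noisy measurements: unstable
print(looks_unstable([1.0, 1.5, 0.6, 2.0, 0.9]))       # True
```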
# Compare benchmarks
python -m pyperf compare_to reference.json current.json
# Show slowest benchmarks
python -m pyperf slowest results.json

# Convert benchmark format
python -m pyperf convert --indent results.json -o formatted.json
# Extract specific benchmarks
python -m pyperf convert --include-benchmark "test_name" input.json -o output.json
python -m pyperf convert --exclude-benchmark "slow_test" input.json -o output.json
# Filter runs
python -m pyperf convert --include-runs 1-5,10 input.json -o output.json
python -m pyperf convert --exclude-runs 0,7-9 input.json -o output.json
# Metadata operations
python -m pyperf convert --extract-metadata cpu_model_name input.json -o output.json
python -m pyperf convert --remove-all-metadata input.json -o clean.json
python -m pyperf convert --update-metadata environment=production input.json -o output.json
# Merge benchmarks
python -m pyperf convert --add results2.json results1.json -o combined.json
# Dump raw data
python -m pyperf dump results.json
# Dump in CSV format
python -m pyperf dump --csv results.json

# Show system tuning recommendations
python -m pyperf system tune
# Collect system metadata
python -m pyperf collect_metadata
# Collect system metadata with output file
python -m pyperf collect_metadata --output metadata.json
# Show CPU information
python -m pyperf system show
# Check system configuration
python -m pyperf system check
# Reset system configuration
python -m pyperf system reset

# High precision benchmarking with specific configuration
python -m pyperf timeit \
    --processes 20 \
    --values 10 \
    --warmups 3 \
    --min-time 0.2 \
    --affinity 0-7 \
    --output precise_results.json \
    'sum(x*x for x in range(1000))'
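The --min-time option drives loop calibration: pyperf roughly doubles the inner loop count until a single measurement lasts at least the requested duration. A simplified sketch of that idea (the `calibrate_loops` helper is illustrative; pyperf's real calibration also accounts for warmups and runs inside worker processes):

```python
import time

def calibrate_loops(stmt, min_time=0.2):
    """Double the loop count until one timed measurement of `stmt`
    takes at least min_time seconds (pyperf's default is 0.1 s)."""
    loops = 1
    while True:
        start = time.perf_counter()
        for _ in range(loops):
            stmt()
        if time.perf_counter() - start >= min_time:
            return loops
        loops *= 2

# Use a small min_time so the sketch finishes quickly
loops = calibrate_loops(lambda: sum(x * x for x in range(1000)), min_time=0.05)
print(loops)  # always a power of two
```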
# Memory tracking benchmarks (--track-memory reads peak memory usage,
# --tracemalloc uses the tracemalloc module)
python -m pyperf timeit --track-memory '[i for i in range(10000)]'
python -m pyperf timeit --tracemalloc '[i for i in range(10000)]'
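--tracemalloc instruments the worker with the standard-library tracemalloc module. The snippet below shows, in miniature, the kind of peak-allocation figure it reports (a standalone sketch, not pyperf internals):

```python
import tracemalloc

# Track Python-level allocations while building the benchmarked list
tracemalloc.start()
data = [i for i in range(10000)]
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# The list's allocations are visible in the peak figure
print(peak > 0)
```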
# Profile-enabled benchmark (cProfile data written to profile.out)
python -m pyperf timeit \
    --profile profile.out \
    -s 'import random; data = [random.random() for _ in range(1000)]' \
    'sorted(data)'
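--profile collects cProfile data, which can then be inspected with the standard-library pstats module. A self-contained sketch of that inspection step (here the profile is generated in-process rather than by the CLI, and the temporary file name is illustrative):

```python
import cProfile
import pstats
import random
import tempfile

# Profile the same workload as the CLI example above
data = [random.random() for _ in range(1000)]

profiler = cProfile.Profile()
profiler.enable()
sorted(data)
profiler.disable()

# Dump the stats to a file, then load them as you would a --profile output
with tempfile.NamedTemporaryFile(suffix=".prof", delete=False) as f:
    path = f.name
profiler.dump_stats(path)

stats = pstats.Stats(path)
stats.sort_stats("cumulative")
print(stats.total_calls)  # number of profiled function calls
```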
# Compare multiple implementations
python -m pyperf timeit --output impl1.json 'list(map(str, range(100)))'
python -m pyperf timeit --output impl2.json '[str(i) for i in range(100)]'
python -m pyperf compare_to impl1.json impl2.json
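compare_to reports each benchmark's mean and the resulting speedup, and applies a statistical significance test before declaring a difference meaningful. A rough sketch of the ratio calculation alone (the `compare` helper is illustrative and omits the significance test):

```python
import statistics

def compare(ref_values, new_values):
    """Return the speedup of the new measurements relative to the
    reference: a ratio > 1.0 means the new code is faster."""
    ref_mean = statistics.mean(ref_values)
    new_mean = statistics.mean(new_values)
    return ref_mean / new_mean

speedup = compare([2.0, 2.1, 1.9], [1.0, 1.1, 0.9])
print(f"{speedup:.2f}x faster")  # 2.00x faster
```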
# Batch benchmarking into a single file (--append takes the target file)
python -m pyperf timeit --name "small_list" --append suite.json '[i for i in range(10)]'
python -m pyperf timeit --name "medium_list" --append suite.json '[i for i in range(100)]'
python -m pyperf timeit --name "large_list" --append suite.json '[i for i in range(1000)]'

The CLI can be integrated into Python scripts for automated benchmarking:
import subprocess

import pyperf

# Run CLI command from Python
result = subprocess.run([
    'python', '-m', 'pyperf', 'timeit',
    '--output', 'temp_results.json',
    '--quiet',
    'sum(range(100))'
], capture_output=True, text=True)

if result.returncode == 0:
    # Load and analyze results
    benchmark = pyperf.Benchmark.load('temp_results.json')
    print(f"Mean: {benchmark.mean():.6f} seconds")

PyPerf CLI supports multiple output formats for integration with analysis tools:
# Human-readable output (default)
python -m pyperf show results.json
# JSON output for programmatic processing
python -m pyperf show --json results.json
# CSV output for spreadsheet analysis
python -m pyperf dump --csv results.json
# Quiet output for scripting
python -m pyperf timeit --quiet 'len([1,2,3])'
# Verbose output for debugging
python -m pyperf timeit --verbose -s 'import math' 'math.sqrt(2)'

Install with Tessl CLI
npx tessl i tessl/pypi-pyperf