
# PyPerf

A comprehensive Python toolkit for writing, running, and analyzing benchmarks with high precision and statistical rigor. PyPerf provides automated benchmark execution with multiple worker processes, statistical analysis with outlier detection, and comprehensive system metadata collection for reproducible performance measurements.

## Package Information

- **Package Name**: pyperf
- **Language**: Python
- **Installation**: `pip install pyperf`
- **Dependencies**: `psutil>=5.9.0`

## Core Imports

```python
import pyperf
```

Common usage patterns:

```python
from pyperf import Runner, Benchmark, BenchmarkSuite
```

## Basic Usage

```python
import pyperf

# Create a benchmark runner
runner = pyperf.Runner()

# Simple function benchmarking
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

# Benchmark the function
benchmark = runner.bench_func('fibonacci', fibonacci, 20)
print(f"fibonacci(20): {benchmark}")

# Python code benchmarking (timeit-style)
benchmark = runner.timeit('list_creation',
                          stmt='[i for i in range(100)]')
print(f"List creation: {benchmark}")

# Save results to file
benchmark.dump('fibonacci_results.json')

# Load and compare results later
loaded = pyperf.Benchmark.load('fibonacci_results.json')
print(f"Mean: {loaded.mean():.6f} seconds")
print(f"Standard deviation: {loaded.stdev():.6f} seconds")
```

## Architecture

PyPerf follows a layered architecture designed for precision and statistical validity:

- **Runner**: High-level interface managing worker processes and benchmark execution
- **Benchmark/BenchmarkSuite**: Data containers with statistical analysis capabilities
- **Run**: Individual measurement collections with metadata
- **Worker Processes**: Isolated execution environments minimizing measurement noise
- **Metadata Collection**: Comprehensive system state capture for reproducible results
- **Statistical Analysis**: Built-in outlier detection, confidence intervals, and significance testing

This design enables PyPerf to achieve microsecond-level precision while maintaining statistical rigor through automated calibration, multi-process execution, and comprehensive metadata tracking.
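The statistical layer can be illustrated with a standard-library sketch. This mirrors the quantities a Benchmark exposes (mean, standard deviation, median) but is not pyperf's actual implementation; the timing values and the outlier rule below are simplified stand-ins for illustration only.

```python
import statistics

# Hypothetical raw timings in seconds, as a Run would store them;
# the last value simulates a noisy measurement
values = [0.100, 0.102, 0.098, 0.101, 0.250]

mean = statistics.mean(values)
stdev = statistics.stdev(values)
median = statistics.median(values)

# A simple outlier heuristic: flag values far from the median,
# scaled by the median absolute deviation (pyperf's checks differ)
threshold = 5 * statistics.median([abs(v - median) for v in values])
outliers = [v for v in values if abs(v - median) > threshold]

print(f"mean={mean:.4f} stdev={stdev:.4f} median={median:.4f}")
print(f"outliers: {outliers}")
```

Because the median is robust to a single extreme value, the 0.250 s measurement is flagged while the tightly clustered timings pass.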

## Capabilities

### Benchmark Execution

Core benchmarking functionality including function timing, Python code evaluation, async function benchmarking, and external command measurement. The Runner class provides the primary interface for executing benchmarks with automatic calibration and statistical validation.

```python { .api }
class Runner:
    def __init__(self, values=None, processes=None, loops=0, min_time=0.1,
                 metadata=None, show_name=True, program_args=None,
                 add_cmdline_args=None, _argparser=None, warmups=1): ...
    def bench_func(self, name: str, func: callable, *args, **kwargs) -> Benchmark: ...
    def bench_time_func(self, name: str, time_func: callable, *args, **kwargs) -> Benchmark: ...
    def bench_async_func(self, name: str, func: callable, *args, **kwargs) -> Benchmark: ...
    def timeit(self, name: str, stmt=None, setup="pass", teardown="pass",
               inner_loops=None, duplicate=None, metadata=None, globals=None) -> Benchmark: ...
    def bench_command(self, name: str, command: list) -> Benchmark: ...
```

[Benchmark Execution](./benchmark-execution.md)
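The automatic calibration mentioned above (choosing a loop count so each measurement lasts at least `min_time`) can be sketched in plain Python. This is a simplified illustration of the idea, not pyperf's implementation; the `calibrate` helper and its benchmark target are hypothetical.

```python
import time

def calibrate(func, min_time=0.1):
    """Double the loop count until one measurement lasts at least min_time."""
    loops = 1
    while True:
        t0 = time.perf_counter()
        for _ in range(loops):
            func()
        elapsed = time.perf_counter() - t0
        if elapsed >= min_time:
            # Enough loops for a stable measurement; report per-loop time
            return loops, elapsed / loops
        loops *= 2

loops, per_loop = calibrate(lambda: sum(range(100)))
print(f"calibrated loops={loops}, per-loop time={per_loop:.9f}s")
```

Batching many loops into one timed measurement keeps the per-loop cost well above the timer's resolution, which is why fast snippets end up with large loop counts.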

### Data Management and Analysis

Comprehensive data structures for storing, analyzing, and managing benchmark results. Includes statistical analysis, metadata handling, and serialization capabilities for persistent storage and result sharing.

```python { .api }
class Run:
    def __init__(self, values, warmups=None, metadata=None, collect_metadata=True): ...
    def get_metadata(self) -> dict: ...
    def get_loops(self) -> int: ...
    def get_total_loops(self) -> int: ...

class Benchmark:
    def __init__(self, runs): ...
    def get_name(self) -> str: ...
    def get_values(self) -> tuple: ...
    def get_nrun(self) -> int: ...
    def mean(self) -> float: ...
    def stdev(self) -> float: ...
    def median(self) -> float: ...
    def percentile(self, p: float) -> float: ...
    def add_run(self, run: Run): ...
    @staticmethod
    def load(file): ...
    def dump(self, file, compact=True, replace=False): ...

class BenchmarkSuite:
    def __init__(self, benchmarks, filename=None): ...
    def get_benchmarks(self) -> list: ...
    def get_benchmark(self, name: str) -> Benchmark: ...
    def add_benchmark(self, benchmark: Benchmark): ...
```

[Data Management](./data-management.md)

### Command-Line Interface

Comprehensive command-line tools for running benchmarks, analyzing results, and managing benchmark data. Includes specialized commands for Python code timing, external command benchmarking, result comparison, and system optimization recommendations.

```python { .api }
# Available via: python -m pyperf <command>
# Primary commands:
# timeit     - Python code benchmarking
# command    - External command benchmarking
# show       - Display benchmark results
# compare_to - Compare benchmark files
# stats      - Detailed statistics
# hist       - Result histograms
# metadata   - System metadata display
# check      - Stability validation
# convert    - Data format conversion
# system     - System tuning recommendations
```

[Command-Line Interface](./cli.md)

### Utilities and System Integration

System utilities for metadata collection, platform detection, statistical functions, and performance optimization. Includes CPU affinity management, memory tracking, and environment analysis capabilities.

```python { .api }
def python_implementation() -> str: ...
def python_has_jit() -> bool: ...
def format_metadata(name: str, value) -> str: ...
def add_runs(filename: str, result): ...

# Version information
VERSION: tuple
__version__: str
```

[Utilities](./utilities.md)