# Utilities and System Integration
System utilities for metadata collection, platform detection, statistical functions, and performance optimization. Includes CPU affinity management, memory tracking, and environment analysis capabilities for comprehensive benchmarking support.
## Capabilities
### Platform Detection
Utility functions for detecting Python implementation characteristics and platform-specific features.

```python { .api }
def python_implementation() -> str:
    """
    Get the Python implementation name.

    Returns:
        Implementation name (e.g., 'CPython', 'PyPy', 'Jython')
    """

def python_has_jit() -> bool:
    """
    Check whether the Python implementation has JIT compilation.

    Returns:
        True if the implementation includes a JIT (e.g., PyPy), False otherwise
    """
```
### High-Precision Timing
Re-exported high-precision timer for consistent timing across platforms.

```python { .api }
perf_counter: callable
"""
High-precision timer function (re-exported from time.perf_counter).

Returns monotonic time in seconds as a float with the highest available
resolution. Used internally by PyPerf for all timing measurements, and
kept as a public re-export for backward compatibility; new code can call
time.perf_counter() directly.
"""
```
### Metadata Formatting
Functions for formatting and displaying benchmark metadata.

```python { .api }
def format_metadata(name: str, value) -> str:
    """
    Format a metadata value for human-readable display.

    Args:
        name: Metadata field name
        value: Metadata value to format

    Returns:
        Formatted string representation of the metadata value
    """
```
### Data File Management
Utility functions for managing benchmark data files and merging results.

```python { .api }
def add_runs(filename: str, result):
    """
    Add benchmark results to an existing JSON file.

    Args:
        filename: Path to an existing benchmark JSON file
        result: Benchmark or BenchmarkSuite object to merge into the file

    Note:
        Creates a new file if it doesn't exist. Merges runs if benchmark
        names match, otherwise adds the result as a new benchmark in the suite.
    """
```
### Version Information
Constants providing version information for the PyPerf package.

```python { .api }
VERSION: tuple
"""Version tuple (major, minor, patch) - currently (2, 9, 0)"""

__version__: str
"""Version string - currently '2.9.0'"""
```
### Error Handling
Exception classes for different types of errors in PyPerf operations.

```python { .api }
# Import from the respective modules:
# from pyperf._runner import CLIError
# from pyperf._compare import CompareError
# from pyperf._hooks import HookError

class CLIError(Exception):
    """Exception raised for command-line interface errors."""

class CompareError(Exception):
    """Exception raised during benchmark comparison operations."""

class HookError(Exception):
    """Exception raised by the hook system."""
```
### Hook System
Advanced integration system for extending PyPerf with custom functionality.

```python { .api }
def get_hook_names() -> list:
    """
    Get the list of available hook names.

    Returns:
        List of hook names discovered from entry points
    """

def get_selected_hooks(hook_names: list) -> list:
    """
    Get specific hooks by name.

    Args:
        hook_names: List of hook names to retrieve

    Returns:
        List of hook classes
    """

def instantiate_selected_hooks(hook_names: list) -> list:
    """
    Create instances of the selected hooks.

    Args:
        hook_names: List of hook names to instantiate

    Returns:
        List of instantiated hook objects
    """

# Built-in hooks available through entry points:
# perf_record - Linux perf integration for detailed performance analysis
# pystats - Python internal statistics collection
# _test_hook - Testing and development hook

# Hook usage in the CLI:
# python -m pyperf timeit --hook perf_record 'sum(range(100))'
# python -m pyperf timeit --hook pystats 'list(range(100))'
```
### Constants and Type Definitions
Internal constants used throughout PyPerf for consistency and configuration.

```python { .api }
# Default measurement unit
DEFAULT_UNIT: str = 'second'

# Valid numeric types for measurements
NUMBER_TYPES: tuple = (int, float)

# Valid metadata value types
METADATA_VALUE_TYPES: tuple = (int, str, float)

# JSON format version for file compatibility
_JSON_VERSION: str = '1.0'

# Metadata fields checked for consistency across runs
_CHECKED_METADATA: tuple = (
    'aslr', 'cpu_count', 'cpu_model_name', 'hostname', 'inner_loops',
    'name', 'platform', 'python_executable', 'python_implementation',
    'python_unicode', 'python_version', 'unit'
)
```
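
As an illustrative sketch of how the type tuples above can back a validation check: the helper `is_valid_metadata_value` is hypothetical, not part of the PyPerf API, and the tuple simply mirrors the documented constant.

```python
# Mirrors METADATA_VALUE_TYPES documented above (assumption: validation
# is a plain isinstance check; PyPerf's internal logic may differ).
METADATA_VALUE_TYPES = (int, str, float)

def is_valid_metadata_value(value) -> bool:
    """Return True if value is an accepted metadata value type."""
    return isinstance(value, METADATA_VALUE_TYPES)

print(is_valid_metadata_value('CPython'))  # True
print(is_valid_metadata_value(3.14))       # True
print(is_valid_metadata_value([1, 2]))     # False
```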
## Usage Examples
### Platform Detection

```python
import pyperf

# Check the Python implementation
impl = pyperf.python_implementation()
print(f"Running on: {impl}")  # e.g., "CPython" or "PyPy"

# Optimize based on JIT availability
if pyperf.python_has_jit():
    # PyPy or other JIT implementations
    runner = pyperf.Runner(values=10, processes=6)
else:
    # CPython - use more processes, fewer values per process
    runner = pyperf.Runner(values=3, processes=20)
```
### High-Precision Timing

```python
import pyperf

# Use PyPerf's timer directly
start = pyperf.perf_counter()
# ... code to measure ...
end = pyperf.perf_counter()
elapsed = end - start
print(f"Elapsed: {elapsed:.9f} seconds")

# This is the same timer used internally by PyPerf
# for all benchmark measurements
```
### Metadata Handling

```python
import pyperf

# Load a benchmark and examine its metadata
benchmark = pyperf.Benchmark.load('results.json')
metadata = benchmark.get_metadata()

# Format metadata for display
for name, value in metadata.items():
    formatted = pyperf.format_metadata(name, value)
    print(f"{name}: {formatted}")

# Common metadata fields include:
# - python_version: Python version string
# - python_implementation: Implementation name
# - platform: Operating system and architecture
# - cpu_model_name: CPU model information
# - hostname: System hostname
# - date: Benchmark execution timestamp
```
### File Management

```python
import pyperf

# Create the initial benchmark
runner = pyperf.Runner()
bench1 = runner.timeit('test1', 'sum(range(100))')
bench1.dump('results.json')

# Add more results to the same file
bench2 = runner.timeit('test2', 'list(range(100))')
pyperf.add_runs('results.json', bench2)

# Load the combined results
suite = pyperf.BenchmarkSuite.load('results.json')
print(f"Benchmarks: {suite.get_benchmark_names()}")  # ['test1', 'test2']
```
### Version Information

```python
import pyperf

# Check the PyPerf version programmatically
print(f"PyPerf version: {pyperf.__version__}")  # "2.9.0"
print(f"Version tuple: {pyperf.VERSION}")       # (2, 9, 0)

# Version compatibility checking: compare tuples rather than individual
# components (checking major >= 2 and minor >= 9 separately would
# wrongly reject a hypothetical 3.0 release)
if pyperf.VERSION >= (2, 9):
    print("Using modern PyPerf with latest features")
```
### Error Handling

```python
import pyperf

try:
    runner = pyperf.Runner()
    # This may raise CLIError if the CLI arguments are invalid
    runner.parse_args(['--invalid-option'])
except pyperf.CLIError as e:
    print(f"CLI error: {e}")

try:
    # This may raise CompareError if the benchmarks are incompatible
    bench1 = pyperf.Benchmark.load('results1.json')
    bench2 = pyperf.Benchmark.load('results2.json')
    bench1.add_runs(bench2)  # e.g., different units or metadata
except pyperf.CompareError as e:
    print(f"Comparison error: {e}")
```
### Hook System Usage

```python
import pyperf

# Check the available hooks
hooks = pyperf.get_hook_names()
print(f"Available hooks: {hooks}")

# Use hooks with a Runner (advanced usage)
runner = pyperf.Runner()
# Hooks are typically configured via the CLI: --hook perf_record
# or through environment/configuration files

# Examples of what hooks enable:
# - perf_record: integrates with Linux perf for detailed CPU analysis
# - pystats: collects Python internal statistics during benchmarking
# - Custom hooks: user-defined extensions for specialized measurements
```
### Advanced System Integration

```python
import os

import pyperf

# Environment-aware benchmarking
def create_optimized_runner():
    """Create a runner optimized for the current system."""
    # Detect system characteristics
    has_jit = pyperf.python_has_jit()
    cpu_count = os.cpu_count() or 1  # os.cpu_count() can return None

    # Optimize parameters
    if has_jit:
        # JIT implementations need more warmup
        return pyperf.Runner(
            values=10,
            processes=min(6, max(1, cpu_count // 2)),
            warmups=3,
            min_time=0.2,
        )
    else:
        # CPython benefits from more processes
        return pyperf.Runner(
            values=3,
            processes=min(20, cpu_count),
            warmups=1,
            min_time=0.1,
        )

# Use in production
runner = create_optimized_runner()
benchmark = runner.timeit('optimized_test', 'sum(range(1000))')
```