# Command-Line Interface

Comprehensive command-line tools for running benchmarks, analyzing results, and managing benchmark data. PyPerf provides a full-featured CLI accessible via `python -m pyperf` with specialized commands for different benchmarking scenarios.

## Capabilities

### Primary Commands

The PyPerf CLI provides extensive command-line functionality through various subcommands.

```python { .api }
# Command-line interface: python -m pyperf <command> [options]

# Core benchmarking commands:
# timeit - Python code benchmarking (similar to the timeit module)
# command - External command benchmarking

# Analysis and display commands:
# show - Display benchmark results with formatting options
# compare_to - Compare multiple benchmark files
# stats - Show detailed statistics
# hist - Display histogram of measurement values
# metadata - Show benchmark metadata
# check - Validate benchmark stability and detect issues

# Data management commands:
# convert - Modify and transform benchmark files
# dump - Display raw benchmark data in various formats
# slowest - List benchmarks sorted by execution time

# System commands:
# system - Tune, show, and reset the system configuration for benchmarking
# collect_metadata - Gather comprehensive system metadata
```
### TimeitRunner Class

Specialized runner for timeit-style Python code benchmarking with command-line integration.

```python { .api }
class TimeitRunner(pyperf.Runner):
    """Command-line interface for Python code benchmarking."""

    def __init__(self):
        """Initialize the timeit runner with CLI argument parsing."""
```
### CommandRunner Class

Command-line wrapper for external command benchmarking.

```python { .api }
class CommandRunner(pyperf.Runner):
    """Command-line interface for external command benchmarking."""

    def __init__(self, cmd):
        """
        Initialize the command runner.

        Args:
            cmd: the external command to benchmark
        """
```
### Common CLI Options

Standard options available across most commands:

```python { .api }
# Execution control options:
# --rigorous - More thorough benchmarking (more processes/values)
# --fast - Quick rough measurements (fewer processes/values)
# --debug-single-value - Debug mode with a single measurement
# --quiet - Suppress warnings and verbose output
# --verbose - Enable detailed output and progress information

# Process and measurement options:
# --processes N - Number of worker processes
# --values N - Number of measurements per process
# --loops N - Number of loops per measurement
# --warmups N - Number of warmup iterations
# --min-time SECONDS - Minimum measurement duration

# System optimization options:
# --affinity CPUS - CPU affinity for worker processes (e.g., "0-3" or "0,2,4")
# --tracemalloc - Enable memory allocation tracking
# --track-memory - Track memory usage during benchmarks
# --profile FILE - Collect cProfile profiling data and write it to FILE
# --inherit-environ VARS - Comma-separated environment variables passed to worker processes
# --copy-env - Copy the whole environment to worker processes

# Process management options:
# --timeout SECONDS - Maximum benchmark execution time
# --worker - Run in worker process mode (internal)
# --worker-task ID - Worker task identifier (internal)
# --pipe FD - Pipe output to a file descriptor

# Calibration options:
# --calibrate-loops / --recalibrate-loops - Force loop count calibration
# --calibrate-warmups / --recalibrate-warmups - Force warmup calibration

# Output and format options:
# --output FILE - Output file for results
# --append FILE - Append results to FILE instead of overwriting
# --json - JSON output format
# --csv - CSV output format
```
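
To make the relationship between these options concrete, here is a small sketch that assembles a `python -m pyperf timeit` command line from option values. The `build_timeit_argv` helper is hypothetical (not part of pyperf); it only demonstrates how the flags compose:

```python
import sys

def build_timeit_argv(stmt, setup=None, processes=None, values=None,
                      loops=None, warmups=None, output=None, rigorous=False):
    """Assemble an argv list for `python -m pyperf timeit`.

    Illustrative helper only; pyperf does not ship this function.
    """
    argv = [sys.executable, "-m", "pyperf", "timeit"]
    if rigorous:
        argv.append("--rigorous")
    for flag, value in (("--processes", processes), ("--values", values),
                        ("--loops", loops), ("--warmups", warmups),
                        ("--output", output)):
        if value is not None:
            argv += [flag, str(value)]
    if setup:
        argv += ["-s", setup]
    argv.append(stmt)
    return argv

argv = build_timeit_argv("math.sqrt(2)", setup="import math", processes=4)
```

The resulting list can be handed directly to `subprocess.run`, avoiding shell-quoting issues.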
## Usage Examples

### Python Code Benchmarking (timeit command)

```bash
# Basic Python statement timing
python -m pyperf timeit '[i*2 for i in range(1000)]'

# With setup code
python -m pyperf timeit -s 'import math' 'math.sqrt(2)'

# Multiple statements (run together as one benchmark body)
python -m pyperf timeit -s 'x = list(range(100))' 'sorted(x)' 'x.sort()'

# Save results to a file
python -m pyperf timeit --output results.json '[i for i in range(100)]'

# Rigorous benchmarking
python -m pyperf timeit --rigorous 'sum(range(1000))'

# Quick rough measurement
python -m pyperf timeit --fast --quiet 'len("hello world")'
```
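
The `-s` setup / statement split mirrors the stdlib `timeit` module. For comparison, the same measurement expressed with stdlib `timeit` (a single-process rough estimate, without pyperf's calibration, worker processes, or statistics):

```python
import timeit

# Same stmt/setup split as: python -m pyperf timeit -s 'import math' 'math.sqrt(2)'
timer = timeit.Timer(stmt="math.sqrt(2)", setup="import math")
elapsed = timer.timeit(number=100_000)  # total seconds for 100,000 loops
print(f"{elapsed:.6f} s total, {elapsed / 100_000:.3e} s per loop")
```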
### External Command Benchmarking (command)

```bash
# Benchmark an external command
python -m pyperf command -- python -c 'print("Hello World")'

# With a custom name
python -m pyperf command --name "python_hello" -- python -c 'print("Hello")'

# Save results
python -m pyperf command --output cmd_results.json -- ls -la

# Benchmark with CPU affinity
python -m pyperf command --affinity 0-3 -- python -c 'import math; print(math.pi)'
```
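
One pitfall when benchmarking external commands: the benchmarked program's own flags (such as `-c`) can collide with pyperf's option parsing, which is why a `--` separator is useful. A hedged sketch (the helper name is ours, not pyperf's) of building such an invocation programmatically:

```python
import sys

def build_command_argv(cmd, name=None, output=None):
    """Build argv for `python -m pyperf command`, inserting `--` so the
    benchmarked command's own flags are not parsed as pyperf options.

    Illustrative helper only; pyperf does not ship this function.
    """
    argv = [sys.executable, "-m", "pyperf", "command"]
    if name is not None:
        argv += ["--name", name]
    if output is not None:
        argv += ["--output", output]
    argv.append("--")
    return argv + list(cmd)

argv = build_command_argv(["python", "-c", "print('Hello')"], name="python_hello")
```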
### Result Analysis and Display

```bash
# Show benchmark results
python -m pyperf show results.json

# Show with histogram
python -m pyperf hist results.json

# Detailed statistics
python -m pyperf stats results.json

# Show metadata
python -m pyperf metadata results.json

# Check benchmark stability
python -m pyperf check results.json

# Compare benchmarks
python -m pyperf compare_to reference.json current.json

# Show slowest benchmarks
python -m pyperf slowest results.json
```
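
`compare_to` summarizes differences as a ratio of mean timings. The arithmetic behind a reported speedup factor is simply:

```python
def speedup(reference_mean, current_mean):
    """How many times faster `current` is than `reference`
    (values > 1.0 mean the current run takes less time)."""
    return reference_mean / current_mean

# e.g. a reference mean of 2.0 ms vs a current mean of 1.6 ms
print(f"{speedup(2.0e-3, 1.6e-3):.2f}x faster")  # prints "1.25x faster"
```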
### Data Management

```bash
# Reformat a benchmark file with indented JSON
python -m pyperf convert --indent results.json -o formatted.json

# Extract specific benchmarks
python -m pyperf convert --include-benchmark "test_name" input.json -o output.json
python -m pyperf convert --exclude-benchmark "slow_test" input.json -o output.json

# Filter runs
python -m pyperf convert --include-runs 1-5,10 input.json -o output.json
python -m pyperf convert --exclude-runs 0,7-9 input.json -o output.json

# Metadata operations
python -m pyperf convert --extract-metadata cpu_model_name input.json
python -m pyperf convert --remove-all-metadata input.json -o clean.json
python -m pyperf convert --update-metadata environment=production input.json -o output.json

# Merge benchmarks
python -m pyperf convert --add results2.json results1.json -o combined.json

# Dump raw data
python -m pyperf dump results.json

# Dump in CSV format
python -m pyperf dump --csv results.json
```
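
Run selectors such as `1-5,10` are plain range lists. A sketch of how such a selector expands into run indices (our illustrative parser, not pyperf's internal one):

```python
def parse_run_selection(spec):
    """Expand a run selector such as "1-5,10" into a sorted list of
    non-negative run indices (illustrative sketch)."""
    indices = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            indices.update(range(int(lo), int(hi) + 1))
        else:
            indices.add(int(part))
    return sorted(indices)

print(parse_run_selection("1-5,10"))  # [1, 2, 3, 4, 5, 10]
```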
### System Configuration

```bash
# Tune the system for stable benchmarking (may require root)
python -m pyperf system tune

# Collect system metadata
python -m pyperf collect_metadata

# Collect system metadata with an output file
python -m pyperf collect_metadata --output metadata.json

# Show the current system configuration
python -m pyperf system show

# Reset the system configuration to its defaults
python -m pyperf system reset
```
### Advanced Usage Examples

```bash
# High precision benchmarking with specific configuration
python -m pyperf timeit \
    --processes 20 \
    --values 10 \
    --warmups 3 \
    --min-time 0.2 \
    --affinity 0-7 \
    --output precise_results.json \
    'sum(x*x for x in range(1000))'

# Memory tracking benchmark (values become memory usage instead of time)
python -m pyperf timeit --track-memory '[i for i in range(10000)]'

# Trace memory allocations with tracemalloc
python -m pyperf timeit --tracemalloc '[i for i in range(10000)]'

# Profile-enabled benchmark
python -m pyperf timeit \
    --profile profile.out \
    -s 'import random; data = [random.random() for _ in range(1000)]' \
    'sorted(data)'

# Compare multiple implementations
python -m pyperf timeit --output impl1.json 'list(map(str, range(100)))'
python -m pyperf timeit --output impl2.json '[str(i) for i in range(100)]'
python -m pyperf compare_to impl1.json impl2.json

# Batch benchmarking into a single file
python -m pyperf timeit --name "small_list" --output results.json '[i for i in range(10)]'
python -m pyperf timeit --name "medium_list" --append results.json '[i for i in range(100)]'
python -m pyperf timeit --name "large_list" --append results.json '[i for i in range(1000)]'
```
### Integration with Scripts

The CLI can be integrated into Python scripts for automated benchmarking:

```python
import subprocess
import sys

import pyperf

# Run a CLI command from Python
result = subprocess.run([
    sys.executable, '-m', 'pyperf', 'timeit',
    '--output', 'temp_results.json',
    '--quiet',
    'sum(range(100))'
], capture_output=True, text=True)

if result.returncode == 0:
    # Load and analyze the results
    benchmark = pyperf.Benchmark.load('temp_results.json')
    print(f"Mean: {benchmark.mean():.6f} seconds")
```
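
When pyperf itself is not importable in the analysis environment, the result file can still be read as plain JSON. The sketch below assumes the typical pyperf file layout (a top-level `"benchmarks"` list whose runs carry `"values"`); verify against your actual files before relying on it:

```python
import json
import statistics

def mean_from_pyperf_json(data):
    """Compute the mean of all measured values in a loaded pyperf JSON
    document (assumes the usual benchmarks -> runs -> values layout)."""
    values = []
    for bench in data.get("benchmarks", []):
        for run in bench.get("runs", []):
            values.extend(run.get("values", []))
    return statistics.mean(values)

# Typical usage:
# with open("temp_results.json") as f:
#     print(mean_from_pyperf_json(json.load(f)))
```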
### Output Formats

PyPerf CLI supports multiple output formats for integration with analysis tools:

```bash
# Human-readable output (default)
python -m pyperf show results.json

# JSON output for programmatic processing
python -m pyperf show --json results.json

# CSV output for spreadsheet analysis
python -m pyperf dump --csv results.json

# Quiet output for scripting
python -m pyperf timeit --quiet 'len([1,2,3])'

# Verbose output for debugging
python -m pyperf timeit --verbose -s 'import math' 'math.sqrt(2)'
```