
# Configuration and Customization

## Overview

pytest-benchmark provides extensive configuration options through pytest command-line arguments, configuration files, and test markers. You can control timing parameters, output formats, storage options, and benchmark behavior.

## Test Markers

### @pytest.mark.benchmark

```python { .api }
@pytest.mark.benchmark(**kwargs)
def test_function(benchmark):
    """
    Mark a test with custom benchmark settings.

    Supported kwargs:
        max_time (float): Maximum time per test in seconds
        min_rounds (int): Minimum number of rounds
        min_time (float): Minimum time per round in seconds
        timer (callable): Timer function to use
        group (str): Benchmark group name
        disable_gc (bool): Disable garbage collection during benchmarks
        warmup (bool): Enable warmup rounds
        warmup_iterations (int): Number of warmup iterations
        calibration_precision (int): Calibration precision factor
        cprofile (bool): Enable cProfile integration
    """
```

## Command-Line Options

### Timing Control

```bash { .api }
# Timing parameters
--benchmark-min-time SECONDS           # Minimum time per round (default: 0.000005)
--benchmark-max-time SECONDS           # Maximum time per test (default: 1.0)
--benchmark-min-rounds NUM             # Minimum rounds (default: 5)
--benchmark-timer FUNC                 # Timer function (default: platform default)
--benchmark-calibration-precision NUM  # Calibration precision (default: 10)
```
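A rough mental model of how these parameters interact (an illustrative sketch only, not pytest-benchmark's actual calibration logic): the runner keeps adding rounds until either the `max-time` budget is exhausted or `min-rounds` is satisfied, whichever demands more.

```python
def estimate_rounds(round_time, min_rounds=5, max_time=1.0):
    """Illustrative only: roughly how many rounds fit in the time budget.

    The real calibrator is more sophisticated (it also scales iterations
    per round via min_time and calibration_precision); this just shows
    the basic trade-off between min_rounds and max_time.
    """
    # Run at least min_rounds, but otherwise no more than max_time allows.
    affordable = int(max_time // round_time) if round_time > 0 else min_rounds
    return max(min_rounds, affordable)

print(estimate_rounds(1 / 1024))  # fast ~1 ms op: 1024 rounds fit in the 1 s budget
print(estimate_rounds(0.5))       # slow op: clamped up to min_rounds -> 5
```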

### Warmup Options

```bash { .api }
# Warmup configuration
--benchmark-warmup KIND            # Warmup mode: auto/on/off (default: auto)
--benchmark-warmup-iterations NUM  # Max warmup iterations (default: 100000)
```

### Execution Control

```bash { .api }
# Benchmark execution
--benchmark-disable-gc  # Disable GC during benchmarks
--benchmark-skip        # Skip benchmark tests
--benchmark-disable     # Disable benchmarking (benchmarked code runs once)
--benchmark-enable      # Force-enable benchmarks
--benchmark-only        # Only run benchmark tests
```

### Display Options

```bash { .api }
# Result display
--benchmark-sort COL        # Sort column: min/max/mean/stddev/name/fullname
--benchmark-group-by LABEL  # Grouping: group/name/fullname/func/fullfunc/param
--benchmark-columns LABELS  # Comma-separated column list
--benchmark-name FORMAT     # Name format: short/normal/long/trial
--benchmark-verbose         # Verbose output
--benchmark-quiet           # Quiet mode
```
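The displayed columns are simple statistics over the per-round timings; in particular, the `ops` column is the reciprocal of the mean round time. A quick sketch of those relationships, using made-up timings rather than pytest-benchmark's internals:

```python
import statistics

# Hypothetical per-round timings in seconds (illustrative data).
timings = [0.0010, 0.0012, 0.0011, 0.0013, 0.0010]

mean = statistics.mean(timings)
stats = {
    "min": min(timings),
    "max": max(timings),
    "mean": mean,
    "stddev": statistics.stdev(timings),
    "median": statistics.median(timings),
    "ops": 1.0 / mean,  # operations per second = 1 / mean round time
    "rounds": len(timings),
}
print(f"{stats['ops']:.0f} ops/sec over {stats['rounds']} rounds")
```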

### Storage and Persistence

```bash { .api }
# Storage options
--benchmark-storage URI  # Storage URI (default: file://./.benchmarks)
--benchmark-netrc FILE   # Netrc file for credentials
--benchmark-save NAME    # Save results under NAME
--benchmark-autosave     # Auto-save with a timestamped name
--benchmark-save-data    # Include raw timing data in saves
```

### Output Formats

```bash { .api }
# Export formats
--benchmark-json PATH           # Export results to JSON
--benchmark-csv FILENAME        # Export results to CSV
--benchmark-histogram FILENAME  # Generate histogram plots
```
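The JSON export is convenient to post-process with the standard library. The sketch below assumes only the general shape of the export (a top-level `benchmarks` list whose entries carry a `stats` mapping) and uses a hand-built sample rather than a real run:

```python
import json

# Hand-built sample mimicking the shape of a --benchmark-json export
# (field names here are illustrative; inspect a real export to confirm).
sample = json.loads("""
{
  "benchmarks": [
    {"name": "test_fast", "group": "string_ops",
     "stats": {"mean": 0.0011, "rounds": 120}},
    {"name": "test_slow", "group": "string_ops",
     "stats": {"mean": 0.0300, "rounds": 12}}
  ]
}
""")

# Report the slowest benchmark by mean time.
slowest = max(sample["benchmarks"], key=lambda b: b["stats"]["mean"])
print(f"slowest: {slowest['name']} ({slowest['stats']['mean']:.4f}s mean)")
```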

### Performance Analysis

```bash { .api }
# Profiling
--benchmark-cprofile COLUMN       # Enable cProfile, sorted by COLUMN
--benchmark-cprofile-loops LOOPS  # cProfile iteration count
--benchmark-cprofile-top COUNT    # Number of profile rows to show
--benchmark-cprofile-dump PREFIX  # Save cProfile dumps with PREFIX
```

### Comparison

```bash { .api }
# Result comparison
--benchmark-compare NUM        # Compare against a previously saved run
--benchmark-compare-fail EXPR  # Fail on performance regression
```

## Usage Examples

### Basic Marker Usage

```python
@pytest.mark.benchmark(group="string_ops", min_rounds=10)
def test_string_processing(benchmark):
    def process_text(text):
        return text.upper().replace(' ', '_')

    result = benchmark(process_text, "hello world")
    assert result == "HELLO_WORLD"
```

### Timing Customization

```python
import time

@pytest.mark.benchmark(
    max_time=2.0,   # Run for up to 2 seconds
    min_rounds=5,   # At least 5 rounds
    min_time=0.01   # Each round at least 10 ms
)
def test_slow_operation(benchmark):
    def slow_function():
        time.sleep(0.01)
        return sum(range(1000))

    result = benchmark(slow_function)
    assert result == 499500
```

### Custom Timer

```python
import time

@pytest.mark.benchmark(timer=time.process_time)
def test_cpu_intensive(benchmark):
    def cpu_work():
        return sum(x**2 for x in range(10000))

    result = benchmark(cpu_work)
    assert result == 333283335000
```

### Grouping and Organization

```python
@pytest.mark.benchmark(group="database")
def test_db_insert(benchmark):
    def insert_data():
        # Simulate database insert
        return "inserted"

    result = benchmark(insert_data)
    assert result == "inserted"


@pytest.mark.benchmark(group="database")
def test_db_select(benchmark):
    def select_data():
        # Simulate database select
        return ["row1", "row2"]

    result = benchmark(select_data)
    assert len(result) == 2
```

## Command-Line Usage Examples

### Basic Benchmarking

```bash
# Run only benchmark tests
pytest --benchmark-only

# Skip benchmark tests
pytest --benchmark-skip

# Run with custom timing
pytest --benchmark-min-rounds=10 --benchmark-max-time=2.0
```

### Result Display

```bash
# Sort by mean time, show specific columns
pytest --benchmark-sort=mean --benchmark-columns=min,max,mean,stddev

# Group by test function name
pytest --benchmark-group-by=func

# Verbose output with detailed timing
pytest --benchmark-verbose
```

### Saving and Comparison

```bash
# Save baseline results
pytest --benchmark-save=baseline

# Compare against the saved baseline
pytest --benchmark-compare=baseline

# Auto-save with timestamp
pytest --benchmark-autosave

# Compare and fail if mean time regresses by more than 5%
pytest --benchmark-compare --benchmark-compare-fail=mean:5%
```

### Export Formats

```bash
# Export to multiple formats
pytest --benchmark-json=results.json \
       --benchmark-csv=results.csv \
       --benchmark-histogram=charts

# Generate cProfile data
pytest --benchmark-cprofile=cumtime \
       --benchmark-cprofile-top=10 \
       --benchmark-cprofile-dump=profiles
```

## Configuration Files

### pytest.ini Configuration

```ini
[pytest]
addopts =
    --benchmark-min-rounds=5
    --benchmark-sort=min
    --benchmark-group-by=group
    --benchmark-columns=min,max,mean,stddev,median,ops,rounds

# Storage configuration
benchmark_storage = file://.benchmarks
benchmark_autosave = true
```

### pyproject.toml Configuration

```toml
[tool.pytest.ini_options]
addopts = [
    "--benchmark-min-rounds=5",
    "--benchmark-sort=min",
    "--benchmark-group-by=group",
]

benchmark_storage = "file://.benchmarks"
benchmark_autosave = true
```

## Advanced Configuration

### Custom Timer Functions

```python
import time

def custom_timer():
    """Custom high-precision timer."""
    return time.perf_counter()

@pytest.mark.benchmark(timer=custom_timer)
def test_with_custom_timer(benchmark):
    result = benchmark(lambda: sum(range(1000)))
    assert result == 499500
```
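The contract a timer must satisfy matches that of `time.perf_counter`: a zero-argument callable returning monotonically non-decreasing float seconds. For instance, a timer built on the integer nanosecond clock (a sketch; in practice `perf_counter` itself is usually sufficient):

```python
import time

def ns_timer():
    """Timer built on the integer nanosecond clock, converted to float seconds."""
    return time.perf_counter_ns() / 1e9

# The contract: successive calls return non-decreasing float seconds.
t0 = ns_timer()
t1 = ns_timer()
print(isinstance(t0, float), t1 >= t0)  # True True
```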

### Storage URI Formats

```bash
# File storage (default)
--benchmark-storage=file://.benchmarks
--benchmark-storage=file:///absolute/path/benchmarks

# Elasticsearch storage
--benchmark-storage=elasticsearch+http://localhost:9200/benchmarks/pytest
--benchmark-storage=elasticsearch+https://user:pass@host:9200/index/type

# With credentials from a netrc file
--benchmark-storage=elasticsearch+https://host:9200/index \
--benchmark-netrc=~/.netrc
```
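These URIs follow standard URL syntax, so their anatomy can be inspected with `urllib.parse` (illustrative only; pytest-benchmark parses storage URIs internally):

```python
from urllib.parse import urlparse

uri = urlparse("elasticsearch+https://user:pass@host:9200/index/type")
print(uri.scheme)    # elasticsearch+https  (backend + transport)
print(uri.hostname)  # host
print(uri.port)      # 9200
print(uri.path)      # /index/type
```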

### Environment Variables

```bash
# Override via environment
export PYTEST_BENCHMARK_DISABLE=1
export PYTEST_BENCHMARK_STORAGE="file:///tmp/benchmarks"

pytest  # Uses environment settings
```

### Programmatic Configuration

```python
# conftest.py
def pytest_configure(config):
    # Programmatic configuration
    config.option.benchmark_min_rounds = 10
    config.option.benchmark_sort = 'mean'
    config.option.benchmark_group_by = 'group'
```

## Performance Regression Detection

### Threshold Expressions

```bash
# Fail if mean time increases by more than 5%
--benchmark-compare-fail=mean:5%

# Fail if minimum time increases by more than 100 ms
--benchmark-compare-fail=min:0.1

# Multiple thresholds
--benchmark-compare-fail=mean:10% --benchmark-compare-fail=max:20%
```

### Expression Formats

```python { .api }
# Supported comparison expressions:
"mean:5%"     # Percentage increase in mean
"min:0.001"   # Absolute increase in seconds
"max:10%"     # Percentage increase in max
"stddev:50%"  # Percentage increase in stddev
```
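One way to picture how such an expression is applied (a simplified sketch of the idea, not pytest-benchmark's implementation): split on `:`, then treat a trailing `%` as a relative threshold against the baseline value and a bare number as an absolute increase in seconds.

```python
def regression_exceeds(expr, baseline, current):
    """Return True if `current` regresses past the threshold in `expr`.

    expr looks like "mean:5%" (relative) or "min:0.001" (absolute seconds);
    baseline/current are the corresponding stat values for one benchmark.
    Simplified sketch -- not pytest-benchmark's actual parser.
    """
    _field, threshold = expr.split(":")
    if threshold.endswith("%"):
        allowed = baseline * (1 + float(threshold[:-1]) / 100)
    else:
        allowed = baseline + float(threshold)
    return current > allowed

print(regression_exceeds("mean:5%", 0.100, 0.104))     # False: within +5%
print(regression_exceeds("mean:5%", 0.100, 0.106))     # True: more than 5% slower
print(regression_exceeds("min:0.001", 0.100, 0.1005))  # False: within +1 ms
```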

## Integration with CI/CD

### GitHub Actions Example

```yaml
- name: Run benchmarks
  run: |
    pytest --benchmark-only \
           --benchmark-compare \
           --benchmark-json=benchmark.json \
           --benchmark-compare-fail=mean:10%

- name: Upload benchmark results
  uses: actions/upload-artifact@v4
  with:
    name: benchmark-results
    path: benchmark.json
```

### Jenkins Example

```bash
# Save a baseline on the master branch
pytest --benchmark-save=master --benchmark-json=master.json

# Compare the feature branch against it
pytest --benchmark-compare=master \
       --benchmark-compare-fail=mean:15% \
       --benchmark-json=feature.json
```