# Basic Benchmarking
Core benchmarking functionality using pytest markers and fixtures. pytest-codspeed provides two primary approaches for measuring performance: decorating entire test functions or using the benchmark fixture for targeted measurement.
## Capabilities
### Benchmark Markers
Decorators that automatically measure the execution time of entire test functions. When pytest-codspeed is enabled, only functions marked with these decorators will be executed.
```python { .api }
@pytest.mark.benchmark
def test_function():
    """Mark entire test function for benchmarking."""
    ...

@pytest.mark.codspeed_benchmark
def test_function():
    """CodSpeed-specific benchmark marker."""
    ...
```
#### Usage Example
```python
import pytest

@pytest.mark.benchmark
def test_list_comprehension():
    # The entire function execution time is measured
    data = [x * x for x in range(1000)]
    assert len(data) == 1000

@pytest.mark.benchmark
def test_generator_expression():
    data = list(x * x for x in range(1000))
    assert len(data) == 1000
```
### Benchmark Fixture
Provides precise control over what code gets measured by wrapping specific function calls. The fixture can only be used once per test function.
```python { .api }
def benchmark(target: Callable[..., T], *args, **kwargs) -> T:
    """
    Execute and measure the performance of the target function.

    Parameters:
    - target: Function to benchmark
    - *args: Positional arguments to pass to target
    - **kwargs: Keyword arguments to pass to target

    Returns:
    The return value of the target function

    Raises:
    RuntimeError: If the benchmark fixture is used more than once per test
    """
```
#### Usage Example
```python
def test_sorting_performance(benchmark):
    import random
    data = [random.randint(1, 1000) for _ in range(1000)]

    # Only the sort operation is measured
    result = benchmark(sorted, data)
    assert len(result) == 1000

def test_with_arguments(benchmark):
    def calculate_sum(numbers, multiplier=1):
        return sum(x * multiplier for x in numbers)

    data = list(range(100))
    result = benchmark(calculate_sum, data, multiplier=2)
    assert result == 9900
```
### CodSpeed Benchmark Fixture
Alternative fixture name that provides identical functionality to the `benchmark` fixture.
```python { .api }
def codspeed_benchmark(target: Callable[..., T], *args, **kwargs) -> T:
    """
    CodSpeed-specific benchmark fixture with identical functionality to benchmark.

    Parameters:
    - target: Function to benchmark
    - *args: Positional arguments to pass to target
    - **kwargs: Keyword arguments to pass to target

    Returns:
    The return value of the target function
    """
```
## Fixture Compatibility
When pytest-codspeed is enabled, it automatically:
- Replaces the `benchmark` fixture from pytest-benchmark if installed
- Disables the pytest-benchmark plugin to prevent conflicts
- Disables the pytest-speed plugin to prevent conflicts
- Archives the original benchmark fixture as `__benchmark` for potential access
## Measurement Behavior
### When CodSpeed is Disabled

- Marker-decorated functions execute normally without measurement
- Benchmark fixtures execute the target function without measurement overhead

### When CodSpeed is Enabled

- Only benchmark-marked functions or tests using benchmark fixtures are executed
- Other test functions are automatically deselected
- Actual measurement occurs using the configured instrument (walltime or instrumentation mode)

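For example, in a module mixing benchmark and ordinary tests, only the marked test is selected when the plugin is enabled; both still run under plain pytest:

```python
import pytest

@pytest.mark.benchmark
def test_squares_sum():
    # Selected (and measured) when pytest-codspeed is enabled
    assert sum(x * x for x in range(10)) == 285

def test_plain():
    # Deselected when pytest-codspeed is enabled; runs under plain pytest
    assert [1, 2] + [3] == [1, 2, 3]
```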
## Error Handling
```python
# This will raise RuntimeError
def test_multiple_benchmark_calls(benchmark):
    result1 = benchmark(sum, [1, 2, 3])  # First call - OK
    result2 = benchmark(max, [1, 2, 3])  # Second call - raises RuntimeError
```
The benchmark fixture enforces single-use per test to ensure measurement accuracy and prevent confusion about which operation is being benchmarked.