# Retry Strategies

Retry strategies control the timeout interval between retry attempts. Each strategy implements a different mathematical approach to backoff timing, allowing fine-tuned control over retry behavior for various network conditions and failure scenarios.

## Capabilities

### Exponential Backoff Retry

Implements exponential backoff, where the timeout increases exponentially with each attempt. This is the default strategy and is recommended for most use cases, as it provides a good balance between responsiveness and avoiding server overload.
```python { .api }
class ExponentialRetry(RetryOptionsBase):
    def __init__(
        self,
        attempts: int = 3,
        start_timeout: float = 0.1,
        max_timeout: float = 30.0,
        factor: float = 2.0,
        statuses: set[int] | None = None,
        exceptions: set[type[Exception]] | None = None,
        methods: set[str] | None = None,
        retry_all_server_errors: bool = True,
        evaluate_response_callback: EvaluateResponseCallbackType | None = None
    ): ...

    def get_timeout(
        self,
        attempt: int,
        response: ClientResponse | None = None
    ) -> float:
        """
        Calculate exponential backoff timeout.

        Args:
            attempt (int): Current attempt number (1-based)
            response (ClientResponse, optional): Response object from previous attempt

        Returns:
            float: Timeout in seconds, calculated as start_timeout * (factor ** attempt), capped at max_timeout
        """
```
Usage example:

```python
from aiohttp_retry import RetryClient, ExponentialRetry

retry_options = ExponentialRetry(
    attempts=4,
    start_timeout=0.5,  # Start with 0.5s
    max_timeout=10.0,   # Cap at 10s
    factor=2.0          # Double each time: 0.5s, 1s, 2s, 4s
)

async with RetryClient(retry_options=retry_options) as client:
    response = await client.get('https://api.example.com/data')
```
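To see the backoff schedule a given configuration produces, `get_timeout` can be called directly. A small sketch based on the formula documented above (the exact attempt numbering used by the client is described in the docstring):

```python
from aiohttp_retry import ExponentialRetry

retry_options = ExponentialRetry(start_timeout=0.5, max_timeout=10.0, factor=2.0)

# Timeout grows as start_timeout * factor ** attempt, capped at max_timeout.
for attempt in range(1, 5):
    print(f"attempt {attempt}: {retry_options.get_timeout(attempt):.2f}s")
```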
### Random Timeout Retry

Generates random timeout intervals within specified bounds. Useful for preventing thundering herd problems when multiple clients might retry simultaneously.
```python { .api }
class RandomRetry(RetryOptionsBase):
    def __init__(
        self,
        attempts: int = 3,
        statuses: Iterable[int] | None = None,
        exceptions: Iterable[type[Exception]] | None = None,
        methods: Iterable[str] | None = None,
        min_timeout: float = 0.1,
        max_timeout: float = 3.0,
        random_func: Callable[[], float] = random.random,
        retry_all_server_errors: bool = True,
        evaluate_response_callback: EvaluateResponseCallbackType | None = None
    ): ...

    def get_timeout(
        self,
        attempt: int,
        response: ClientResponse | None = None
    ) -> float:
        """
        Generate random timeout between min and max bounds.

        Args:
            attempt (int): Current attempt number (ignored)
            response (ClientResponse, optional): Response object (ignored)

        Returns:
            float: Random timeout between min_timeout and max_timeout
        """
```
Usage example:

```python
from aiohttp_retry import RetryClient, RandomRetry

retry_options = RandomRetry(
    attempts=5,
    min_timeout=1.0,  # Minimum 1 second
    max_timeout=5.0   # Maximum 5 seconds
)

async with RetryClient(retry_options=retry_options) as client:
    response = await client.get('https://api.example.com/data')
```
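`RandomRetry` also accepts a custom zero-argument `random_func` (see the signature above), which is useful for seeding or otherwise controlling the randomness, e.g. in tests. A sketch, with the caveat that how the returned value is mapped into the min/max bounds is an implementation detail:

```python
import random

from aiohttp_retry import RandomRetry

seeded = random.Random(42)  # deterministic generator so retry timing is reproducible

retry_options = RandomRetry(
    attempts=5,
    min_timeout=1.0,
    max_timeout=5.0,
    random_func=seeded.random,  # zero-argument callable returning a float in [0, 1)
)
```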
### Predefined Timeout List Retry

Uses a predefined list of timeout values, applying them in order, one per retry attempt. Provides complete control over timeout progression and is useful when you have specific timing requirements.
```python { .api }
class ListRetry(RetryOptionsBase):
    def __init__(
        self,
        timeouts: list[float],
        statuses: Iterable[int] | None = None,
        exceptions: Iterable[type[Exception]] | None = None,
        methods: Iterable[str] | None = None,
        retry_all_server_errors: bool = True,
        evaluate_response_callback: EvaluateResponseCallbackType | None = None
    ):
        """
        Initialize ListRetry with predefined timeout values.

        The number of attempts is automatically set to len(timeouts).
        Each retry will use the corresponding timeout from the list.

        Args:
            timeouts: List of timeout values in seconds for each retry attempt
        """

    def get_timeout(
        self,
        attempt: int,
        response: ClientResponse | None = None
    ) -> float:
        """
        Return timeout from predefined list.

        Args:
            attempt (int): Current attempt number, used as index into timeouts list
            response (ClientResponse, optional): Response object (ignored)

        Returns:
            float: Timeout from timeouts[attempt], since attempts is set to len(timeouts)
        """
```
Usage example:

```python
from aiohttp_retry import RetryClient, ListRetry

# Custom timeout sequence: quick, medium, slow, very slow
retry_options = ListRetry(
    timeouts=[0.5, 2.0, 5.0, 10.0]
)

async with RetryClient(retry_options=retry_options) as client:
    response = await client.get('https://api.example.com/data')
```
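Because each attempt maps directly to a position in the list (see the docstring above), the full schedule can be inspected by calling `get_timeout` with each index. A small sketch:

```python
from aiohttp_retry import ListRetry

timeouts = [0.5, 2.0, 5.0, 10.0]
retry_options = ListRetry(timeouts=timeouts)

# One retry attempt per list entry; get_timeout(attempt) returns timeouts[attempt].
for attempt in range(len(timeouts)):
    print(f"attempt {attempt}: {retry_options.get_timeout(attempt)}s")
```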
### Fibonacci Sequence Retry

Implements Fibonacci-based timeout progression where each timeout is the sum of the two preceding timeouts. Provides a middle ground between linear and exponential growth.
```python { .api }
class FibonacciRetry(RetryOptionsBase):
    def __init__(
        self,
        attempts: int = 3,
        multiplier: float = 1.0,
        statuses: Iterable[int] | None = None,
        exceptions: Iterable[type[Exception]] | None = None,
        methods: Iterable[str] | None = None,
        max_timeout: float = 3.0,
        retry_all_server_errors: bool = True,
        evaluate_response_callback: EvaluateResponseCallbackType | None = None
    ): ...

    def get_timeout(
        self,
        attempt: int,
        response: ClientResponse | None = None
    ) -> float:
        """
        Calculate Fibonacci-based timeout.

        Args:
            attempt (int): Current attempt number (ignored, uses internal state)
            response (ClientResponse, optional): Response object (ignored)

        Returns:
            float: Timeout following Fibonacci sequence * multiplier, capped at max_timeout
        """
```
Usage example:

```python
from aiohttp_retry import RetryClient, FibonacciRetry

retry_options = FibonacciRetry(
    attempts=6,
    multiplier=0.5,   # Scale down the sequence
    max_timeout=15.0  # Cap at 15 seconds
)
# Successive timeouts follow the Fibonacci sequence, scaled by multiplier and capped at max_timeout

async with RetryClient(retry_options=retry_options) as client:
    response = await client.get('https://api.example.com/data')
```
### Exponential Retry with Jitter

Combines exponential backoff with random jitter to prevent synchronized retry attempts across multiple clients. Helps avoid thundering herd problems while maintaining exponential backoff benefits.
```python { .api }
class JitterRetry(ExponentialRetry):
    def __init__(
        self,
        attempts: int = 3,
        start_timeout: float = 0.1,
        max_timeout: float = 30.0,
        factor: float = 2.0,
        statuses: set[int] | None = None,
        exceptions: set[type[Exception]] | None = None,
        methods: set[str] | None = None,
        random_interval_size: float = 2.0,
        retry_all_server_errors: bool = True,
        evaluate_response_callback: EvaluateResponseCallbackType | None = None
    ): ...

    def get_timeout(
        self,
        attempt: int,
        response: ClientResponse | None = None
    ) -> float:
        """
        Calculate exponential backoff with random jitter.

        Formula: base_exponential_timeout + (random(0, random_interval_size) ** factor)
        Where base_exponential_timeout = start_timeout * (factor ** attempt), capped at max_timeout

        Args:
            attempt (int): Current attempt number (1-based)
            response (ClientResponse, optional): Response object (ignored)

        Returns:
            float: Exponential timeout + random jitter component
        """
```
Usage example:

```python
from aiohttp_retry import RetryClient, JitterRetry

retry_options = JitterRetry(
    attempts=4,
    start_timeout=1.0,
    factor=2.0,
    random_interval_size=3.0  # Jitter term: random value in [0, 3.0] raised to factor
)

async with RetryClient(retry_options=retry_options) as client:
    response = await client.get('https://api.example.com/data')
```
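Note that the jitter term is raised to `factor` (per the formula documented above), so with `factor=2.0` and `random_interval_size=3.0` it can add up to 9 seconds rather than 3. A standalone sketch of that formula, assuming the random draw is uniform over [0, random_interval_size]:

```python
import random

start_timeout = 1.0
max_timeout = 30.0
factor = 2.0
random_interval_size = 3.0

attempt = 2
base = min(start_timeout * factor ** attempt, max_timeout)  # exponential component
jitter = random.uniform(0, random_interval_size) ** factor  # jitter component, at most 9.0 here
print(f"timeout = {base + jitter:.2f}s")
```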
### Deprecated Retry Options

```python { .api }
def RetryOptions(*args, **kwargs) -> ExponentialRetry:
    """
    Deprecated alias for ExponentialRetry.

    This function is deprecated and will be removed in a future version.
    Use ExponentialRetry directly instead.

    Returns:
        ExponentialRetry: An ExponentialRetry instance with the provided arguments
    """
```
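Since `RetryOptions(...)` simply returns an `ExponentialRetry` built from the same arguments, migration is a drop-in rename (argument values below are illustrative):

```python
from aiohttp_retry import ExponentialRetry

# Before (deprecated):
# retry_options = RetryOptions(attempts=5, start_timeout=1.0)

# After:
retry_options = ExponentialRetry(attempts=5, start_timeout=1.0)
```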
## Common Configuration Options

All retry strategies support these common configuration parameters:

- **attempts** (int): Maximum number of retry attempts (default: 3; for ListRetry this is derived from len(timeouts))
- **statuses** (Iterable[int]): HTTP status codes that should trigger retries (default: None, relies on retry_all_server_errors)
- **exceptions** (Iterable[type[Exception]]): Exception types that should trigger retries (default: None, retries all exceptions)
- **methods** (Iterable[str]): HTTP methods that support retries (default: all methods)
- **retry_all_server_errors** (bool): Whether to retry all 5xx status codes (default: True)
- **evaluate_response_callback** (Callable): Custom callback to evaluate whether a response should be retried (default: None)
The evaluate_response_callback function receives a ClientResponse object and returns True if the request should be retried, False otherwise. This allows for custom retry logic based on response content, headers, or other factors beyond just status codes.
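For illustration, a sketch combining several of the common options above with `ExponentialRetry` (the endpoint and specific values are placeholders):

```python
from aiohttp import ClientError
from aiohttp_retry import RetryClient, ExponentialRetry

retry_options = ExponentialRetry(
    attempts=5,
    statuses={429},                # also retry 429 Too Many Requests
    exceptions={ClientError},      # retry when aiohttp raises a client error
    methods={'GET', 'HEAD'},       # restrict retries to idempotent methods
    retry_all_server_errors=True,  # keep retrying every 5xx response
)

async with RetryClient(retry_options=retry_options) as client:
    response = await client.get('https://api.example.com/data')
```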