# Callbacks & Hooks

Tenacity provides a callback system with hooks that execute at different stages of the retry lifecycle. These callbacks enable logging, monitoring, custom actions, and integration with external systems during retry operations.

## Callback Lifecycle

Callbacks execute in this order during a retry operation:

1. **before**: Before each attempt (including the first)
2. **[attempt execution]**: Your function runs
3. **after**: After each failed attempt (not invoked when the attempt succeeds)
4. **before_sleep**: Before sleeping between retries (only if retrying)
5. **[sleep period]**: Wait time elapses
6. **retry_error_callback**: When all retries are exhausted (if configured)

## Before Callbacks

Before callbacks execute immediately before each attempt, including the initial attempt.

### before_nothing

```python { .api }
from tenacity import RetryCallState, before_nothing

def before_nothing(retry_state: RetryCallState) -> None:
    """
    Default before callback that performs no action.

    Parameters:
    - retry_state: Complete state of current retry session
    """
```

### before_log

```python { .api }
import logging
from typing import Callable

from tenacity import RetryCallState, before_log

def before_log(
    logger: logging.Logger,
    log_level: int
) -> Callable[[RetryCallState], None]:
    """
    Create a before callback that logs attempt start.

    Parameters:
    - logger: Logger instance to use for output
    - log_level: Logging level (e.g., logging.INFO, logging.WARNING)

    Returns:
    Callback function that logs before each attempt
    """
```

### Before Callback Examples

```python { .api }
import logging

from tenacity import retry, stop_after_attempt, before_log

logger = logging.getLogger(__name__)

# Basic logging before each attempt
@retry(
    stop=stop_after_attempt(3),
    before=before_log(logger, logging.INFO)
)
def logged_operation():
    pass

# Custom before callback
def custom_before_callback(retry_state):
    print(f"Starting attempt {retry_state.attempt_number}")
    if retry_state.attempt_number > 1:
        print(f"Previous attempt failed after {retry_state.seconds_since_start:.2f}s")

@retry(before=custom_before_callback)
def monitored_operation():
    pass

# Metrics collection before callback (`metrics` stands in for your metrics client)
def metrics_before_callback(retry_state):
    metrics.increment('operation.attempts', tags={
        'function': retry_state.fn.__name__,
        'attempt': retry_state.attempt_number
    })

@retry(before=metrics_before_callback)
def instrumented_operation():
    pass
```

## After Callbacks

After callbacks execute immediately after a failed attempt; Tenacity does not invoke them when an attempt succeeds.

### after_nothing

```python { .api }
from tenacity import RetryCallState, after_nothing

def after_nothing(retry_state: RetryCallState) -> None:
    """
    Default after callback that performs no action.

    Parameters:
    - retry_state: Complete state of current retry session
    """
```

### after_log

```python { .api }
import logging
from typing import Callable

from tenacity import RetryCallState, after_log

def after_log(
    logger: logging.Logger,
    log_level: int,
    sec_format: str = "%0.3f"
) -> Callable[[RetryCallState], None]:
    """
    Create an after callback that logs attempt completion.

    Parameters:
    - logger: Logger instance to use for output
    - log_level: Logging level for the log message
    - sec_format: Format string for displaying seconds (default: "%0.3f")

    Returns:
    Callback function that logs after each failed attempt
    """
```

### After Callback Examples

```python { .api }
import logging

from tenacity import retry, stop_after_attempt, after_log

logger = logging.getLogger(__name__)

# Basic logging after each failed attempt
@retry(
    stop=stop_after_attempt(3),
    after=after_log(logger, logging.INFO)
)
def logged_operation():
    pass

# Custom after callback with outcome analysis
def analyze_after_callback(retry_state):
    # after callbacks only run for failed attempts, but checking
    # outcome.failed keeps the callback safe for reuse elsewhere
    if retry_state.outcome.failed:
        exc = retry_state.outcome.exception()  # result() would re-raise
        print(f"Attempt {retry_state.attempt_number} failed: {exc}")
    else:
        result = retry_state.outcome.result()
        print(f"Attempt {retry_state.attempt_number} succeeded: {result}")

@retry(after=analyze_after_callback)
def analyzed_operation():
    pass

# Performance monitoring after callback (`metrics` stands in for your metrics client)
def perf_after_callback(retry_state):
    metrics.histogram('operation.elapsed', retry_state.seconds_since_start, tags={
        'success': not retry_state.outcome.failed,
        'attempt': retry_state.attempt_number
    })

@retry(after=perf_after_callback)
def performance_monitored_operation():
    pass
```

## Before Sleep Callbacks

Before sleep callbacks execute before waiting between retry attempts; they are not called after a successful attempt or once retries are exhausted.

### before_sleep_nothing

```python { .api }
from tenacity import RetryCallState, before_sleep_nothing

def before_sleep_nothing(retry_state: RetryCallState) -> None:
    """
    Default before sleep callback that performs no action.

    Parameters:
    - retry_state: Complete state of current retry session
    """
```

### before_sleep_log

```python { .api }
import logging
from typing import Callable

from tenacity import RetryCallState, before_sleep_log

def before_sleep_log(
    logger: logging.Logger,
    log_level: int,
    exc_info: bool = False
) -> Callable[[RetryCallState], None]:
    """
    Create a before sleep callback that logs retry reason and sleep time.

    Parameters:
    - logger: Logger instance to use for output
    - log_level: Logging level for the log message
    - exc_info: Whether to include exception information in logs

    Returns:
    Callback function that logs before sleeping between retries
    """
```

### Before Sleep Callback Examples

```python { .api }
import logging

from tenacity import retry, stop_after_attempt, wait_exponential, before_sleep_log

logger = logging.getLogger(__name__)

# Basic sleep logging
@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1),
    before_sleep=before_sleep_log(logger, logging.WARNING)
)
def sleep_logged_operation():
    pass

# Custom sleep callback with detailed info
def detailed_sleep_callback(retry_state):
    if retry_state.outcome.failed:
        exc = retry_state.outcome.exception()
        print(f"Retrying due to {type(exc).__name__}: {exc}")
    print(f"Sleeping for {retry_state.upcoming_sleep:.2f} seconds...")
    print(f"Total elapsed: {retry_state.seconds_since_start:.2f}s")

@retry(before_sleep=detailed_sleep_callback)
def detailed_retry_operation():
    pass

# Exponential backoff notification (`notify_monitoring_system` stands in for your alerting hook)
def backoff_notification_callback(retry_state):
    notify_monitoring_system({
        'event': 'retry_backoff',
        'attempt': retry_state.attempt_number,
        'sleep_duration': retry_state.upcoming_sleep,
        'total_elapsed': retry_state.seconds_since_start,
        'function': retry_state.fn.__name__
    })

@retry(before_sleep=backoff_notification_callback)
def monitored_retry_operation():
    pass
```

## Retry Error Callbacks

Retry error callbacks execute when all retry attempts are exhausted, at the point where a RetryError would otherwise be raised. The callback's return value is returned to the caller instead of raising RetryError.

### Retry Error Callback Usage

```python { .api }
import logging
from typing import Any

from tenacity import RetryCallState, retry, stop_after_attempt

logger = logging.getLogger(__name__)

def retry_error_callback(retry_state: RetryCallState) -> Any:
    """
    Callback executed when retries are exhausted.

    Parameters:
    - retry_state: Final state of the retry session

    Returns:
    Value returned to the caller instead of raising RetryError
    """
    # Log final failure
    logger.error(f"All retries exhausted for {retry_state.fn.__name__}")

    # Send alert (`send_alert` stands in for your alerting hook)
    send_alert({
        'function': retry_state.fn.__name__,
        'attempts': retry_state.attempt_number,
        'total_time': retry_state.seconds_since_start,
        'final_exception': str(retry_state.outcome.exception())
    })

@retry(
    stop=stop_after_attempt(3),
    retry_error_callback=retry_error_callback
)
def critical_operation():
    pass
```

## Combining Callbacks

Multiple callback types can be used together for comprehensive monitoring:

```python { .api }
import logging

from tenacity import retry, stop_after_attempt, wait_exponential

# Complete callback setup for production monitoring
production_logger = logging.getLogger('production')

def production_before(retry_state):
    production_logger.info(
        f"Starting {retry_state.fn.__name__} attempt {retry_state.attempt_number}"
    )

def production_after(retry_state):
    if retry_state.outcome.failed:
        exc = retry_state.outcome.exception()
        production_logger.warning(
            f"Attempt {retry_state.attempt_number} failed: {type(exc).__name__}"
        )

def production_sleep(retry_state):
    production_logger.info(
        f"Retrying in {retry_state.upcoming_sleep}s "
        f"(elapsed: {retry_state.seconds_since_start:.1f}s)"
    )

def production_error(retry_state):
    production_logger.error(
        f"All retries failed for {retry_state.fn.__name__} "
        f"after {retry_state.attempt_number} attempts "
        f"in {retry_state.seconds_since_start:.1f}s"
    )

@retry(
    stop=stop_after_attempt(5),
    wait=wait_exponential(multiplier=1, min=1, max=10),
    before=production_before,
    after=production_after,
    before_sleep=production_sleep,
    retry_error_callback=production_error
)
def production_api_call():
    pass
```

## Async Callbacks

All callbacks can be async functions when retrying with AsyncRetrying (used automatically when `@retry` decorates an async function); Tenacity awaits them:

```python { .api }
from tenacity import retry, stop_after_attempt

# Async callback examples (the awaited helpers stand in for your own coroutines)
async def async_before_callback(retry_state):
    await log_attempt_to_database(
        function=retry_state.fn.__name__,
        attempt=retry_state.attempt_number
    )

async def async_after_callback(retry_state):
    if retry_state.outcome.failed:
        await record_failure_metrics(retry_state)
    else:
        await record_success_metrics(retry_state)

async def async_sleep_callback(retry_state):
    await update_retry_dashboard({
        'function': retry_state.fn.__name__,
        'status': 'retrying',
        'next_attempt_in': retry_state.upcoming_sleep
    })

async def async_error_callback(retry_state):
    await send_failure_notification({
        'function': retry_state.fn.__name__,
        'final_state': retry_state
    })

@retry(
    stop=stop_after_attempt(3),
    before=async_before_callback,
    after=async_after_callback,
    before_sleep=async_sleep_callback,
    retry_error_callback=async_error_callback
)
async def async_operation_with_callbacks():
    pass
```

## Advanced Callback Patterns

### Stateful Callbacks

```python { .api }
import time

from tenacity import retry, stop_after_attempt

class RetryMetrics:
    def __init__(self):
        self.attempt_times = []
        self.failure_reasons = []

    def before_callback(self, retry_state):
        self.attempt_times.append(time.time())

    def after_callback(self, retry_state):
        if retry_state.outcome.failed:
            exc = retry_state.outcome.exception()
            self.failure_reasons.append(type(exc).__name__)

    def error_callback(self, retry_state):
        print("Final metrics:")
        print(f"  Attempts: {len(self.attempt_times)}")
        print(f"  Failure types: {set(self.failure_reasons)}")
        print(f"  Total duration: {retry_state.seconds_since_start:.2f}s")

# Usage with stateful callbacks
metrics = RetryMetrics()

@retry(
    stop=stop_after_attempt(5),
    before=metrics.before_callback,
    after=metrics.after_callback,
    retry_error_callback=metrics.error_callback
)
def operation_with_metrics():
    pass
```

### Conditional Callbacks

```python { .api }
import logging

from tenacity import retry

logger = logging.getLogger(__name__)

def conditional_sleep_callback(retry_state):
    # Only log for longer sleep periods
    if retry_state.upcoming_sleep > 5:
        logger.warning(
            f"Long backoff: sleeping {retry_state.upcoming_sleep}s "
            f"after attempt {retry_state.attempt_number}"
        )

    # Send alerts after multiple failures (`send_alert` stands in for your alerting hook)
    if retry_state.attempt_number >= 3:
        send_alert(f"Multiple failures in {retry_state.fn.__name__}")

@retry(before_sleep=conditional_sleep_callback)
def monitored_operation():
    pass
```

### Callback Chaining

```python { .api }
import logging

from tenacity import retry, before_log

logger = logging.getLogger(__name__)

def chain_callbacks(*callbacks):
    """Chain multiple callbacks together."""
    def chained_callback(retry_state):
        for callback in callbacks:
            if callback:  # Skip None callbacks
                callback(retry_state)
    return chained_callback

# Combine multiple callback functions (`metrics` and `maybe_send_alert` stand in for your own hooks)
logging_callback = before_log(logger, logging.INFO)
metrics_callback = lambda rs: metrics.record_attempt(rs)
alert_callback = lambda rs: maybe_send_alert(rs)

combined_before = chain_callbacks(
    logging_callback,
    metrics_callback,
    alert_callback
)

@retry(before=combined_before)
def multi_callback_operation():
    pass
```

### Callback Factories

```python { .api }
from tenacity import retry, stop_after_attempt

def create_monitoring_callbacks(service_name, alert_threshold=3):
    """Factory for creating consistent monitoring callbacks."""

    def before_callback(retry_state):
        metrics.increment(f'{service_name}.attempts')

    def after_callback(retry_state):
        if retry_state.outcome.failed:
            metrics.increment(f'{service_name}.failures')
        else:
            metrics.increment(f'{service_name}.successes')

    def sleep_callback(retry_state):
        if retry_state.attempt_number >= alert_threshold:
            send_alert(f'{service_name} experiencing repeated failures')

    return before_callback, after_callback, sleep_callback

# Use factory for consistent monitoring (`metrics` and `send_alert` stand in for your own hooks)
before_cb, after_cb, sleep_cb = create_monitoring_callbacks('user_service')

@retry(
    stop=stop_after_attempt(5),
    before=before_cb,
    after=after_cb,
    before_sleep=sleep_cb
)
def user_service_operation():
    pass
```

## Debugging and Development Callbacks

### Debug Callbacks

```python { .api }
from tenacity import retry, stop_after_attempt, wait_exponential

def debug_callback_suite():
    """Comprehensive debug callbacks for development."""

    def debug_before(retry_state):
        print(f"\n--- Attempt {retry_state.attempt_number} ---")
        print(f"Function: {retry_state.fn.__name__}")
        print(f"Args: {retry_state.args}")
        print(f"Kwargs: {retry_state.kwargs}")

    def debug_after(retry_state):
        print(f"Outcome: {'FAILED' if retry_state.outcome.failed else 'SUCCESS'}")
        if retry_state.outcome.failed:
            print(f"Exception: {retry_state.outcome.exception()}")
        else:
            print(f"Result: {retry_state.outcome.result()}")

    def debug_sleep(retry_state):
        print(f"Sleeping for {retry_state.upcoming_sleep}s")
        print(f"Total elapsed: {retry_state.seconds_since_start:.2f}s")
        print(f"Total idle time: {retry_state.idle_for:.2f}s")

    return debug_before, debug_after, debug_sleep

# Apply debug callbacks
debug_before, debug_after, debug_sleep = debug_callback_suite()

@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1),
    before=debug_before,
    after=debug_after,
    before_sleep=debug_sleep
)
def debug_operation():
    pass
```

### Testing Callbacks

```python { .api }
import pytest

from tenacity import RetryError, retry, stop_after_attempt, wait_fixed

class TestCallbacks:
    """Callback suite for testing retry behavior."""

    def __init__(self):
        self.attempts = []
        self.failures = []
        self.sleep_times = []

    def before_callback(self, retry_state):
        self.attempts.append(retry_state.attempt_number)

    def after_callback(self, retry_state):
        if retry_state.outcome.failed:
            self.failures.append(retry_state.outcome.exception())

    def sleep_callback(self, retry_state):
        self.sleep_times.append(retry_state.upcoming_sleep)

    def verify_behavior(self, expected_attempts, expected_failures):
        assert len(self.attempts) == expected_attempts
        assert len(self.failures) == expected_failures

# Usage in tests
def test_retry_behavior():
    test_callbacks = TestCallbacks()

    @retry(
        stop=stop_after_attempt(3),
        wait=wait_fixed(1),
        before=test_callbacks.before_callback,
        after=test_callbacks.after_callback,
        before_sleep=test_callbacks.sleep_callback
    )
    def failing_function():
        raise ValueError("Test failure")

    with pytest.raises(RetryError):
        failing_function()

    test_callbacks.verify_behavior(expected_attempts=3, expected_failures=3)
```

## Integration Examples

### Prometheus Metrics Integration

```python { .api }
from prometheus_client import Counter, Histogram, Gauge
from tenacity import retry

# Prometheus metrics
retry_attempts = Counter('retry_attempts_total', 'Total retry attempts', ['function'])
retry_duration = Histogram('retry_duration_seconds', 'Retry operation duration', ['function'])
active_retries = Gauge('active_retries', 'Currently active retry operations', ['function'])

def prometheus_before_callback(retry_state):
    retry_attempts.labels(function=retry_state.fn.__name__).inc()
    if retry_state.attempt_number == 1:
        active_retries.labels(function=retry_state.fn.__name__).inc()

def prometheus_error_callback(retry_state):
    # Fires only on exhaustion; in real use, also decrement the gauge on success
    active_retries.labels(function=retry_state.fn.__name__).dec()
    retry_duration.labels(function=retry_state.fn.__name__).observe(
        retry_state.seconds_since_start
    )

@retry(
    before=prometheus_before_callback,
    retry_error_callback=prometheus_error_callback
)
def monitored_api_call():
    pass
```

### Structured Logging Integration

```python { .api }
import structlog
from tenacity import retry

structured_logger = structlog.get_logger()

def structured_logging_callbacks():
    def before_callback(retry_state):
        structured_logger.info(
            "retry_attempt_start",
            function=retry_state.fn.__name__,
            attempt=retry_state.attempt_number,
            elapsed_seconds=retry_state.seconds_since_start
        )

    def after_callback(retry_state):
        structured_logger.info(
            "retry_attempt_complete",
            function=retry_state.fn.__name__,
            attempt=retry_state.attempt_number,
            success=not retry_state.outcome.failed,
            elapsed_seconds=retry_state.seconds_since_start
        )

    return before_callback, after_callback

before_cb, after_cb = structured_logging_callbacks()

@retry(before=before_cb, after=after_cb)
def structured_logged_operation():
    pass
```

This callback system provides hooks for logging, monitoring, alerting, and custom actions at every stage of the retry lifecycle, enabling full observability and control over retry behavior.