# Monitoring and Callbacks

A comprehensive callback system for monitoring, logging, progress tracking, early stopping, and state persistence during optimization runs. Callbacks enable real-time observation and control of the optimization process.

## Capabilities

### Progress Display Callbacks

Visual feedback and progress tracking during optimization runs, with configurable update intervals and display formats.
```python { .api }
class OptimizationPrinter:
    """
    Prints optimization progress at regular intervals.

    Parameters:
    - print_interval_tells: Print every N tell operations (int, default=1)
    - print_interval_seconds: Print every N seconds (float, default=60.0)
    """

    def __init__(self, print_interval_tells: int = 1, print_interval_seconds: float = 60.0):
        """Initialize the printer with update intervals."""


class ProgressBar:
    """
    Displays a text-based progress bar during optimization.

    Shows current progress through the budget, with estimated time
    remaining and the current best value.
    """
```
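
As a sketch of how such a printer can work, the following minimal callback counts `tell` operations and prints at a fixed interval. The `(optimizer, candidate, loss)` call signature is an assumption modeled on nevergrad-style tell-callbacks, not a guaranteed interface:

```python
class MinimalPrinter:
    """Minimal sketch of an interval-based printer callback.

    Assumption: callbacks are callables invoked after each tell
    with (optimizer, candidate, loss).
    """

    def __init__(self, print_interval_tells: int = 1):
        self.print_interval_tells = print_interval_tells
        self.num_tells = 0  # how many tell operations have been observed

    def __call__(self, optimizer, candidate, loss):
        self.num_tells += 1
        if self.num_tells % self.print_interval_tells == 0:
            print(f"tell #{self.num_tells}: loss={loss:.6f}")
```

The real `OptimizationPrinter` adds a wall-clock interval on top of the tell counter; the counting logic is the same.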

### Logging Callbacks

Structured logging capabilities for optimization runs, with configurable log levels and detailed parameter tracking.
```python { .api }
import logging

class OptimizationLogger:
    """
    Logs optimization progress to a logger.

    Parameters:
    - logger: Logger instance (default: global_logger)
    - log_level: Logging level (default: logging.INFO)
    - log_interval_tells: Log every N tell operations (int, default=1)
    - log_interval_seconds: Log every N seconds (float, default=60.0)
    """

    def __init__(
        self,
        *,
        logger=None,
        log_level: int = logging.INFO,
        log_interval_tells: int = 1,
        log_interval_seconds: float = 60.0,
    ):
        """Initialize the logger with its configuration."""


class ParametersLogger:
    """
    Logs detailed parameter information during optimization.

    Records parameter values, mutations, and evolution history
    for detailed analysis of optimization behavior.
    """
```
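
The two-interval policy above can be sketched as a small gate that fires when either threshold is reached. Treating the intervals as an OR condition is an assumption based on the parameter descriptions; the class name `IntervalGate` is hypothetical:

```python
import time

class IntervalGate:
    """Sketch of a dual-interval policy: report when either
    log_interval_tells tells or log_interval_seconds seconds
    have passed since the last report (assumed OR semantics)."""

    def __init__(self, log_interval_tells: int = 1,
                 log_interval_seconds: float = 60.0):
        self.log_interval_tells = log_interval_tells
        self.log_interval_seconds = log_interval_seconds
        self.tells_since_log = 0
        self.last_log_time = time.monotonic()

    def should_log(self) -> bool:
        """Call once per tell; returns True when a log line is due."""
        self.tells_since_log += 1
        now = time.monotonic()
        due = (self.tells_since_log >= self.log_interval_tells
               or now - self.last_log_time >= self.log_interval_seconds)
        if due:
            self.tells_since_log = 0
            self.last_log_time = now
        return due
```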

### State Persistence Callbacks

Checkpoint and recovery capabilities for long-running optimization tasks, with automatic state saving and restoration.
```python { .api }
class OptimizerDump:
    """
    Periodically dumps optimizer state to a file for recovery.

    Enables resuming optimization from checkpoints in case of
    interruption or system failure.

    Parameters:
    - filepath: Path for the state dump file
    - dump_interval: Dump frequency in evaluations
    """
```

### Early Stopping Callbacks

Intelligent termination conditions based on convergence criteria, improvement thresholds, and custom stopping rules.
```python { .api }
from typing import Callable

class EarlyStopping:
    """
    Implements early stopping based on various criteria.

    Supports multiple stopping conditions, including loss improvement
    tolerance, duration limits, and custom stopping functions.

    Parameters:
    - improvement_tolerance: Minimum improvement threshold
    - patience: Number of iterations to wait for improvement
    - duration_limit: Maximum optimization duration
    - custom_criterion: Custom stopping function
    """

    def add_loss_improvement_tolerance_criterion(
        self,
        tolerance: float,
        patience: int,
    ) -> 'EarlyStopping':
        """
        Add a loss improvement tolerance stopping criterion.

        Args:
            tolerance: Minimum relative improvement required
            patience: Number of iterations to wait for improvement

        Returns:
            Self, for method chaining
        """

    def add_duration_criterion(self, max_duration: float) -> 'EarlyStopping':
        """
        Add a duration-based stopping criterion.

        Args:
            max_duration: Maximum optimization duration in seconds

        Returns:
            Self, for method chaining
        """

    def add_custom_criterion(self, criterion_func: Callable) -> 'EarlyStopping':
        """
        Add a custom stopping criterion.

        Args:
            criterion_func: Function that returns True to stop optimization

        Returns:
            Self, for method chaining
        """
```

### Internal Criterion Classes

Implementation classes for the specific stopping criteria used by the EarlyStopping callback.
```python { .api }
class _DurationCriterion:
    """
    Duration-based stopping criterion implementation.

    Monitors optimization runtime and triggers stopping when
    the maximum duration is exceeded.
    """


class _LossImprovementToleranceCriterion:
    """
    Loss improvement tolerance criterion implementation.

    Tracks improvement in the best loss value and triggers stopping
    when improvement stays below the threshold for the specified patience.
    """
```
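
To make the behavior of these criteria concrete, here is a hedged sketch of both as plain callables. These are illustrative reimplementations, not the library's internal classes; the exact update protocol and comparison semantics are assumptions:

```python
import time

class DurationCriterion:
    """Sketch mirroring _DurationCriterion: True once max_duration
    seconds have elapsed since construction."""

    def __init__(self, max_duration: float):
        self.max_duration = max_duration
        self.start = time.monotonic()

    def __call__(self, optimizer) -> bool:
        return time.monotonic() - self.start >= self.max_duration


class LossImprovementToleranceCriterion:
    """Sketch mirroring _LossImprovementToleranceCriterion: True when
    the best loss has not improved by at least `tolerance` for
    `patience` consecutive updates."""

    def __init__(self, tolerance: float, patience: int):
        self.tolerance = tolerance
        self.patience = patience
        self.best_loss = float("inf")
        self.stall_count = 0

    def update(self, loss: float) -> bool:
        if self.best_loss - loss >= self.tolerance:
            # Sufficient improvement: record it and reset the stall counter
            self.best_loss = loss
            self.stall_count = 0
        else:
            self.stall_count += 1
        return self.stall_count >= self.patience
```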

## Usage Examples

### Basic Progress Monitoring
```python
import nevergrad as ng

# Create optimizer with progress display
param = ng.p.Array(shape=(10,))
optimizer = ng.optimizers.CMA(parametrization=param, budget=100)

# Add progress bar
progress_callback = ng.callbacks.ProgressBar()

# Adding it to the optimizer depends on the optimizer interface;
# here is a manual integration in the optimization loop:
def sphere(x):
    return sum(x**2)

for i in range(optimizer.budget):
    x = optimizer.ask()
    loss = sphere(x.value)
    optimizer.tell(x, loss)

    # Manual progress update
    if i % 10 == 0:
        best = optimizer.provide_recommendation()
        print(f"Iteration {i}: Best loss = {sphere(best.value):.6f}")
```

### Logging Configuration
```python
import logging
import nevergrad as ng

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("optimization")

# Create logger callback
logger_callback = ng.callbacks.OptimizationLogger(
    logger=logger,
    log_level=logging.INFO,
    log_interval_tells=5,       # Log every 5 evaluations
    log_interval_seconds=30.0,  # Or every 30 seconds
)

# Create detailed parameter logger
param_logger = ng.callbacks.ParametersLogger()

# Use in optimization
param = ng.p.Array(shape=(10,))
optimizer = ng.optimizers.CMA(parametrization=param, budget=100)
# Integration depends on the specific optimizer implementation
```

### Early Stopping Setup
```python
import nevergrad as ng

# Objective and parametrization used throughout this example
def sphere(x):
    return sum(x**2)

param = ng.p.Array(shape=(10,))

# Create early stopping with multiple criteria
early_stopping = ng.callbacks.EarlyStopping()

# Add improvement tolerance criterion
early_stopping.add_loss_improvement_tolerance_criterion(
    tolerance=1e-6,  # Must improve by at least 1e-6
    patience=20,     # Wait up to 20 iterations
)

# Add duration limit
early_stopping.add_duration_criterion(
    max_duration=3600.0  # Stop after 1 hour
)

# Add custom criterion
def custom_stop_criterion(optimizer):
    """Stop if the loss is below target."""
    best = optimizer.provide_recommendation()
    return sphere(best.value) < 1e-3

early_stopping.add_custom_criterion(custom_stop_criterion)

# Use with optimizer
optimizer = ng.optimizers.CMA(parametrization=param, budget=1000)

# Manual integration with early stopping
for i in range(optimizer.budget):
    x = optimizer.ask()
    loss = sphere(x.value)
    optimizer.tell(x, loss)

    # Check stopping criteria
    if early_stopping.should_stop(optimizer):
        print(f"Early stopping triggered at iteration {i}")
        break
```

### State Persistence
```python
import nevergrad as ng

# Create optimizer dump for checkpointing
dump_callback = ng.callbacks.OptimizerDump(
    filepath="optimizer_checkpoint.pkl",
    dump_interval=50,  # Save every 50 evaluations
)

# Combined callback usage
class OptimizationRunner:
    def __init__(self, optimizer, callbacks=None):
        self.optimizer = optimizer
        self.callbacks = callbacks or []
        self.iteration = 0

    def run(self, function, budget):
        for i in range(budget):
            x = self.optimizer.ask()
            loss = function(x.value)
            self.optimizer.tell(x, loss)

            # Execute callbacks (assumes each exposes an on_iteration hook)
            for callback in self.callbacks:
                callback.on_iteration(self.optimizer, i, loss)

            self.iteration += 1

# Usage
def sphere(x):
    return sum(x**2)

param = ng.p.Array(shape=(10,))
optimizer = ng.optimizers.CMA(parametrization=param, budget=100)

callbacks = [
    ng.callbacks.ProgressBar(),
    ng.callbacks.OptimizationLogger(log_interval_tells=10),
    ng.callbacks.OptimizerDump(filepath="checkpoint.pkl", dump_interval=25),
]

runner = OptimizationRunner(optimizer, callbacks)
runner.run(sphere, 100)
```

### Custom Callback Implementation
```python
class CustomCallback:
    """Example custom callback implementation."""

    def __init__(self, target_loss=1e-3):
        self.target_loss = target_loss
        self.best_losses = []

    def on_iteration(self, optimizer, iteration, current_loss):
        """Called after each optimization iteration."""
        # Track the running best loss seen so far
        if self.best_losses:
            best_loss = min(self.best_losses[-1], current_loss)
        else:
            best_loss = current_loss
        self.best_losses.append(best_loss)

        if best_loss < self.target_loss:
            print(f"Target loss {self.target_loss} achieved at iteration {iteration}")

        # Log every 25 iterations
        if iteration % 25 == 0:
            print(f"Iteration {iteration}: Best = {best_loss:.6f}")

    def on_completion(self, optimizer):
        """Called when optimization completes."""
        print(f"Optimization completed. Total evaluations: {len(self.best_losses)}")
        print(f"Final best loss: {min(self.best_losses):.6f}")

# Usage
custom_callback = CustomCallback(target_loss=1e-4)
# Integrate with the optimization loop as shown above
```

### Multi-Objective Callback Monitoring
```python
class MultiObjectiveCallback:
    """Callback for multi-objective optimization monitoring."""

    def __init__(self, log_interval=10):
        self.log_interval = log_interval
        self.pareto_history = []

    def on_iteration(self, optimizer, iteration, losses):
        """Monitor Pareto front evolution."""
        if iteration % self.log_interval == 0:
            pareto_front = optimizer.pareto_front()
            self.pareto_history.append(len(pareto_front))
            print(f"Iteration {iteration}: Pareto front size = {len(pareto_front)}")

    def plot_pareto_evolution(self):
        """Plot the evolution of the Pareto front size."""
        import matplotlib.pyplot as plt
        iterations = range(0, len(self.pareto_history) * self.log_interval, self.log_interval)
        plt.plot(iterations, self.pareto_history)
        plt.xlabel("Iterations")
        plt.ylabel("Pareto Front Size")
        plt.title("Evolution of Pareto Front")
        plt.show()

# Use with multi-objective optimization
mo_callback = MultiObjectiveCallback(log_interval=20)
```