# Core Optimization

Core optimization functions and classes for general-purpose optimization with CMA-ES. This includes the main interfaces for function minimization and the ask-and-tell optimization loop.

## Main Optimization Functions

### fmin2 - Main Interface Function

```python { .api }
def fmin2(
    objective_function,
    x0,
    sigma0,
    options=None,
    args=(),
    gradf=None,
    restarts=0,
    restart_from_best=False,
    incpopsize=2,
    eval_initial_x=False,
    parallel_objective=None,
    noise_handler=None,
    noise_change_sigma_exponent=1,
    noise_kappa_exponent=0,
    bipop=False,
    callback=None,
    init_callback=None
):
    """
    Functional interface to CMA-ES for non-convex function minimization.

    This is the main recommended interface for CMA-ES optimization, with
    optional restarts and noise handling.

    Parameters:
    -----------
    objective_function : callable
        Function to minimize, called as objective_function(x, *args).
        Should return a scalar value. Can return numpy.nan to reject
        solution x (triggers resampling without counting as an evaluation).

    x0 : array-like or callable
        Initial solution estimate (phenotype coordinates).
        Can be a callable that returns the initial guess for each restart.
        Can also be a CMAEvolutionStrategy instance.

    sigma0 : float or array-like
        Initial standard deviation (step-size). Should be about 1/4
        of the search domain width. Use None if x0 is a
        CMAEvolutionStrategy instance.

    options : dict, optional
        CMA-ES options dictionary. See CMAOptions() for available options.
        Common options:
        - 'ftarget': target function value (default -inf)
        - 'maxfevals': max function evaluations (default inf)
        - 'maxiter': max iterations (default 100 + 150*(N+3)**2 // popsize**0.5)
        - 'popsize': population size (default 4 + floor(3*log(N)))
        - 'bounds': box constraints [[lower_bounds], [upper_bounds]]
        - 'tolfun': function value tolerance (default 1e-11)
        - 'tolx': solution tolerance (default 1e-11)

    args : tuple, optional
        Additional arguments passed to objective_function.

    gradf : callable, optional
        Gradient function where len(gradf(x, *args)) == len(x).
        Called once per iteration if provided.

    restarts : int or dict, optional
        Number of IPOP restarts with increasing population size.
        If a dict, the keys 'maxrestarts' and 'maxfevals' are recognized.

    restart_from_best : bool, optional
        Whether to restart from the best solution found (default False).

    incpopsize : float, optional
        Population size multiplier for restarts (default 2).

    eval_initial_x : bool, optional
        Whether to evaluate the initial solution x0 (default False).

    parallel_objective : callable, optional
        Function for parallel evaluation: parallel_objective(list_of_x, *args)
        should return a list of function values.

    noise_handler : NoiseHandler, optional
        Handler for noisy objective functions.

    noise_change_sigma_exponent : float, optional
        Exponent for sigma adaptation with noise (default 1).

    noise_kappa_exponent : float, optional
        Exponent for noise level adaptation (default 0).

    bipop : bool, optional
        Use the BIPOP restart strategy (default False).

    callback : callable, optional
        Function called after each iteration: callback(CMAEvolutionStrategy).

    init_callback : callable, optional
        Function called after initialization: init_callback(CMAEvolutionStrategy).

    Returns:
    --------
    tuple[numpy.ndarray, CMAEvolutionStrategy]
        (xbest, es) where:
        - xbest: best solution found
        - es: CMAEvolutionStrategy instance with complete results

    Examples:
    ---------
    >>> import cma
    >>>
    >>> # Simple optimization
    >>> def sphere(x):
    ...     return sum(x**2)
    >>>
    >>> x, es = cma.fmin2(sphere, [1, 2, 3], 0.5)
    >>> print(f"Best solution: {x}")
    >>> print(f"Function value: {es.result.fbest}")
    >>>
    >>> # With options
    >>> x, es = cma.fmin2(sphere, [1, 2, 3], 0.5,
    ...                   options={'maxfevals': 1000, 'popsize': 20})
    >>>
    >>> # With restarts
    >>> x, es = cma.fmin2(sphere, [1, 2, 3], 0.5, restarts=2)
    >>>
    >>> # Box constraints
    >>> bounds = [[-5, -5, -5], [5, 5, 5]]
    >>> x, es = cma.fmin2(sphere, [1, 2, 3], 0.5,
    ...                   options={'bounds': bounds})
    >>>
    >>> # Parallel evaluation
    >>> def parallel_sphere(X_list, *args):
    ...     return [sum(x**2) for x in X_list]
    >>>
    >>> x, es = cma.fmin2(None, [1, 2, 3], 0.5,
    ...                   parallel_objective=parallel_sphere)
    """
    pass
```
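As the option list above notes, the default population size grows only logarithmically with the problem dimension N. A quick sketch of that `4 + floor(3 * log(N))` formula in plain Python:

```python
import math

def default_popsize(N):
    """Default CMA-ES population size as quoted above: 4 + floor(3 * log(N))."""
    return 4 + math.floor(3 * math.log(N))

# Population size grows very slowly with dimension:
for N in (2, 10, 100):
    print(N, default_popsize(N))  # 6, 10, 17
```

Even at N=100 the default population is only 17 candidates per iteration, which is why restarts with the `incpopsize` multiplier can be worthwhile on hard problems.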

### fmin - Legacy Interface (Deprecated)

```python { .api }
def fmin(objective_function, x0, sigma0, *posargs, **kwargs):
    """
    DEPRECATED: use fmin2 instead.

    This function remains fully functional and is maintained for backward
    compatibility, but fmin2 is recommended for new code.

    Parameters:
    -----------
    Same as fmin2.

    Returns:
    --------
    list
        Extended result list: [xbest, fbest, evals_best, evaluations,
        iterations, xfavorite, stds, stop_dict, es, logger]

    Notes:
    ------
    The relationship between fmin and fmin2:

    >>> res = fmin(objective, x0, sigma0)
    >>> x, es = fmin2(objective, x0, sigma0)
    >>> # which is equivalent to:
    >>> x, es = res[0], res[-2]
    """
    pass
```
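The index mapping in the note above can be illustrated with a stand-in list of the same shape (the string values are placeholders, not real optimizer output):

```python
# Stand-in for fmin's extended result list; entries are placeholder labels.
res = ["xbest", "fbest", "evals_best", "evaluations",
       "iterations", "xfavorite", "stds", "stop_dict", "es", "logger"]

# fmin2 returns (xbest, es), which corresponds to positions 0 and -2:
x, es = res[0], res[-2]
print(x, es)  # prints: xbest es
```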

## CMAEvolutionStrategy Class

The main CMA-ES optimizer class, providing fine-grained control through the ask-and-tell interface.

### Class Definition

```python { .api }
class CMAEvolutionStrategy:
    """
    CMA-ES stochastic optimizer class with ask-and-tell interface.

    This class provides the most flexible interface to CMA-ES, allowing
    users to control the optimization loop iteration by iteration.
    """

    def __init__(self, x0, sigma0, inopts=None):
        """
        Initialize the CMA-ES optimizer.

        Parameters:
        -----------
        x0 : array-like
            Initial solution estimate; determines the problem dimension N.
            Given in "phenotype" coordinates (after transformation, if
            one is applied).

        sigma0 : float or array-like
            Initial standard deviation(s). Should be about 1/4 of the
            search domain width. Problem variables should be scaled so
            that a single standard deviation is meaningful for all of
            them.

        inopts : dict, optional
            Options dictionary. See CMAOptions() for available options.
            Key options include:
            - 'bounds': [[lower], [upper]] for box constraints
            - 'maxiter': maximum iterations
            - 'popsize': population size
            - 'seed': random seed for reproducibility
            - 'verb_disp': display verbosity level

        Examples:
        ---------
        >>> import cma
        >>>
        >>> # Basic initialization
        >>> es = cma.CMAEvolutionStrategy([0, 0, 0], 0.5)
        >>>
        >>> # With options
        >>> opts = {'popsize': 20, 'maxiter': 1000, 'seed': 123}
        >>> es = cma.CMAEvolutionStrategy([0, 0, 0], 0.5, opts)
        >>>
        >>> # Different sigma per coordinate
        >>> es = cma.CMAEvolutionStrategy([0, 0, 0], [0.5, 1.0, 0.2])
        """
        pass
```

### Core Methods

```python { .api }
def ask(self, number=None, xmean=None, gradf=None, args=()):
    """
    Sample new candidate solutions from the current distribution.

    Parameters:
    -----------
    number : int, optional
        Number of solutions to return. Defaults to the population size.

    xmean : array-like, optional
        Distribution mean override. If None, the current mean is used.

    gradf : callable, optional
        Gradient function for a mean shift. Called as gradf(xmean, *args).

    args : tuple, optional
        Arguments passed to gradf.

    Returns:
    --------
    list[numpy.ndarray]
        List of candidate solutions (phenotype coordinates).

    Examples:
    ---------
    >>> es = cma.CMAEvolutionStrategy([0, 0, 0], 0.5)
    >>> solutions = es.ask()
    >>> len(solutions) == es.popsize
    True
    >>>
    >>> # Ask for a specific number of solutions
    >>> solutions = es.ask(number=10)
    >>> len(solutions)
    10
    """
    pass

def tell(self, arx, fitnesses, check_points=True, copy=False):
    """
    Update distribution parameters based on candidate evaluations.

    Parameters:
    -----------
    arx : list[array-like]
        List of candidate solutions (as returned by ask()).

    fitnesses : array-like
        Corresponding fitness values; lower values indicate better
        solutions. May contain numpy.inf for infeasible solutions.

    check_points : bool, optional
        Whether to check input validity (default True).

    copy : bool, optional
        Whether to copy input arrays (default False).

    Examples:
    ---------
    >>> import cma
    >>> import numpy as np
    >>>
    >>> def objective(x):
    ...     return sum(x**2)
    >>>
    >>> es = cma.CMAEvolutionStrategy([1, 2, 3], 0.5)
    >>> solutions = es.ask()
    >>> fitnesses = [objective(x) for x in solutions]
    >>> es.tell(solutions, fitnesses)
    >>>
    >>> # Mark infeasible solutions with np.inf
    >>> # (feasible is a user-supplied predicate)
    >>> fitnesses_with_inf = [objective(x) if feasible(x) else np.inf
    ...                       for x in solutions]
    >>> es.tell(solutions, fitnesses_with_inf)
    """
    pass

def stop(self):
    """
    Check termination criteria.

    Returns:
    --------
    dict or False
        Dictionary of active termination conditions if optimization
        should stop, False otherwise. Common termination reasons:
        - 'ftarget': target function value reached
        - 'maxfevals': maximum function evaluations exceeded
        - 'maxiter': maximum iterations exceeded
        - 'tolx': solution tolerance reached
        - 'tolfun': function value tolerance reached
        - 'stagnation': no improvement for many iterations

    Examples:
    ---------
    >>> es = cma.CMAEvolutionStrategy([0, 0, 0], 0.5)
    >>> es.stop()  # initially False
    False
    >>>
    >>> # After optimization
    >>> while not es.stop():
    ...     X = es.ask()
    ...     es.tell(X, [sum(x**2) for x in X])
    >>>
    >>> termination = es.stop()
    >>> print(f"Stopped because: {list(termination.keys())}")
    """
    pass

def optimize(self, objective_function, iterations=None, args=(), **kwargs):
    """
    Convenience method to run the complete optimization loop.

    Equivalent to running the ask-tell loop until a termination
    criterion is met.

    Parameters:
    -----------
    objective_function : callable
        Function to minimize, called as objective_function(x, *args).

    iterations : int, optional
        Maximum number of iterations to run.

    args : tuple, optional
        Additional arguments for objective_function.

    **kwargs : dict
        Additional keyword arguments (callback, etc.).

    Returns:
    --------
    CMAEvolutionStrategy
        Self, for method chaining.

    Examples:
    ---------
    >>> import cma
    >>>
    >>> def rosenbrock(x):
    ...     return sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)
    >>>
    >>> es = cma.CMAEvolutionStrategy(4 * [0.1], 0.5)
    >>> es = es.optimize(rosenbrock)
    >>> print(f"Best solution: {es.result.xbest}")
    >>> print(f"Function value: {es.result.fbest}")
    >>>
    >>> # Method chaining
    >>> result = cma.CMAEvolutionStrategy(4 * [0.1], 0.5).optimize(rosenbrock).result
    """
    pass
```
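The ask-tell contract above is optimizer-agnostic: any object exposing `ask`, `tell`, and `stop` with these shapes can drive the same loop. A minimal toy stand-in (pure random search, not CMA-ES) to illustrate the contract without requiring `cma`:

```python
import random

class ToyRandomSearch:
    """Toy optimizer with the same ask/tell/stop surface as above (not CMA-ES)."""
    def __init__(self, x0, sigma0, popsize=8, maxiter=50):
        self.mean, self.sigma = list(x0), sigma0
        self.popsize, self.maxiter = popsize, maxiter
        self.countiter, self.fbest, self.xbest = 0, float("inf"), list(x0)

    def ask(self):
        # Sample popsize candidates around the current mean.
        return [[m + self.sigma * random.gauss(0, 1) for m in self.mean]
                for _ in range(self.popsize)]

    def tell(self, solutions, fitnesses):
        # Greedy update: move the mean to the best candidate so far.
        self.countiter += 1
        f, x = min(zip(fitnesses, solutions))
        if f < self.fbest:
            self.fbest, self.xbest, self.mean = f, x, x

    def stop(self):
        # dict-or-False, mirroring the stop() convention documented above.
        return {"maxiter": self.maxiter} if self.countiter >= self.maxiter else False

random.seed(1)
opt = ToyRandomSearch([1.0, 2.0], 0.5)
while not opt.stop():
    X = opt.ask()
    opt.tell(X, [sum(xi**2 for xi in x) for x in X])
print(opt.countiter, opt.fbest)
```

The driving loop is character-for-character the same loop used with `CMAEvolutionStrategy`; only the object behind it changes.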

### Properties and Results

```python { .api }
@property
def result(self):
    """
    Current optimization result.

    Returns:
    --------
    CMAEvolutionStrategyResult
        Named tuple with fields:
        - xbest: best solution found so far
        - fbest: best function value found
        - evals_best: number of evaluations when the best was found
        - evaluations: total function evaluations
        - iterations: total iterations completed
        - xfavorite: current distribution mean (often better under noise)
        - stds: current standard deviations per coordinate
        - stop: termination conditions dictionary (if stopped)

    Examples:
    ---------
    >>> es = cma.CMAEvolutionStrategy([0, 0, 0], 0.5)
    >>> es.optimize(lambda x: sum(x**2))
    >>>
    >>> result = es.result
    >>> print(f"Best solution: {result.xbest}")
    >>> print(f"Best fitness: {result.fbest}")
    >>> print(f"Evaluations used: {result.evaluations}")
    >>> print(f"Final mean: {result.xfavorite}")
    >>> print(f"Final stds: {result.stds}")
    """
    pass

@property
def popsize(self):
    """Population size (number of offspring per iteration)."""
    pass

@property
def sigma(self):
    """Current overall step size (scalar)."""
    pass

@property
def mean(self):
    """Current distribution mean (in genotype coordinates)."""
    pass

def result_pretty(self, number_of_runs=0, time_str=None, fbestever=None):
    """
    Pretty-print optimization results.

    Parameters:
    -----------
    number_of_runs : int, optional
        Number of restarts performed.

    time_str : str, optional
        Elapsed-time string.

    fbestever : float, optional
        Best function value over all runs.

    Returns:
    --------
    str
        Formatted result summary.
    """
    pass

def disp(self, iteration=None):
    """
    Display current optimization status.

    Parameters:
    -----------
    iteration : int, optional
        Current iteration number for display.
    """
    pass
```
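The field layout listed above can be sketched with a plain `collections.namedtuple` (all values below are placeholders; the real result object is produced by the optimizer inside `cma`):

```python
from collections import namedtuple

# Field order as listed in the result docstring above.
CMAEvolutionStrategyResult = namedtuple(
    "CMAEvolutionStrategyResult",
    ["xbest", "fbest", "evals_best", "evaluations",
     "iterations", "xfavorite", "stds", "stop"])

# Placeholder values, shaped like a finished 2-D run.
result = CMAEvolutionStrategyResult(
    xbest=[0.0, 0.0], fbest=1e-12, evals_best=940,
    evaluations=960, iterations=120,
    xfavorite=[0.0, 0.0], stds=[1e-6, 1e-6], stop={"tolfun": 1e-11})

# Fields are accessible by name, as in the examples above.
print(result.fbest, result.stop)
```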

### Utility Methods

```python { .api }
def pickle_dumps(self):
    """
    Return pickle.dumps(self) with special handling for lambda functions.

    Returns:
    --------
    bytes
        Pickled representation of the optimizer state.
    """
    pass

@staticmethod
def pickle_loads(s):
    """
    Inverse of pickle_dumps.

    Parameters:
    -----------
    s : bytes
        Pickled optimizer state.

    Returns:
    --------
    CMAEvolutionStrategy
        Restored optimizer instance.
    """
    pass

def copy(self):
    """
    Create a (deep) copy of the optimizer.

    Returns:
    --------
    CMAEvolutionStrategy
        Independent copy of the optimizer.
    """
    pass
```

## Usage Patterns

### Basic Ask-and-Tell Loop

```python { .api }
import cma

def sphere(x):
    return sum(x**2)

# Initialize optimizer
es = cma.CMAEvolutionStrategy(5 * [0.1], 0.3)

# Optimization loop
while not es.stop():
    # Get candidate solutions
    solutions = es.ask()

    # Evaluate objective function
    fitness_values = [sphere(x) for x in solutions]

    # Update optimizer
    es.tell(solutions, fitness_values)

    # Optional: display progress
    es.disp()

# Results
print(f"Best solution: {es.result.xbest}")
print(f"Best fitness: {es.result.fbest}")
print(es.result_pretty())
```

### Parallel Evaluation

```python { .api }
import cma
import numpy as np
from multiprocessing import Pool

def objective(x):
    # Expensive computation
    return sum(x**2) + 0.1 * sum(np.sin(30 * x))

def evaluate_batch(solutions):
    """Evaluate solutions in parallel."""
    with Pool() as pool:
        return pool.map(objective, solutions)

# Optimization with parallel evaluation
es = cma.CMAEvolutionStrategy(10 * [0.1], 0.5)

while not es.stop():
    solutions = es.ask()
    fitness_values = evaluate_batch(solutions)
    es.tell(solutions, fitness_values)

    if es.countiter % 10 == 0:
        es.disp()

print(f"Optimization completed in {es.result.iterations} iterations")
```
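The same batch shape also works with `concurrent.futures`; a sketch using a thread pool and a toy objective (threads only pay off when the objective is I/O-bound or releases the GIL; the `multiprocessing.Pool` version above is the choice for CPU-bound Python code):

```python
from concurrent.futures import ThreadPoolExecutor

def objective(x):
    # Toy stand-in for an expensive objective.
    return sum(xi**2 for xi in x)

def evaluate_batch(solutions, max_workers=4):
    """Evaluate a list of solutions concurrently; map preserves input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(objective, solutions))

print(evaluate_batch([[1, 2], [3, 4], [0, 0]]))  # [5, 25, 0]
```

Order preservation matters here: `tell(solutions, fitness_values)` pairs entries positionally, so the batch evaluator must return fitnesses in the same order as its input.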

### Custom Termination and Callbacks

```python { .api }
import cma

def custom_callback(es):
    """Custom callback function called each iteration."""
    if es.countiter % 50 == 0:
        print(f"Iteration {es.countiter}: best = {es.result.fbest:.6f}")

    # Custom termination condition
    if es.result.fbest < 1e-8:
        es.opts['ftarget'] = es.result.fbest  # trigger termination

def objective(x):
    return sum((x - 1)**2)

# Optimization with callback
es = cma.CMAEvolutionStrategy([0, 0, 0], 0.5)

while not es.stop():
    solutions = es.ask()
    fitness_values = [objective(x) for x in solutions]
    es.tell(solutions, fitness_values)
    custom_callback(es)

print("Optimization terminated")
print(f"Reason: {list(es.stop().keys())}")
```

### State Management

```python { .api }
import cma
import pickle

# Initialize and run optimization
es = cma.CMAEvolutionStrategy([0, 0, 0], 0.5)

# Run at most 100 iterations before saving
for _ in range(100):
    if es.stop():
        break
    solutions = es.ask()
    fitness_values = [sum(x**2) for x in solutions]
    es.tell(solutions, fitness_values)

# Save optimizer state
state = es.pickle_dumps()
# Or with regular pickle: state = pickle.dumps(es)

# Later: restore and continue
es_restored = cma.CMAEvolutionStrategy.pickle_loads(state)
# Or: es_restored = pickle.loads(state)

# Continue optimization
while not es_restored.stop():
    solutions = es_restored.ask()
    fitness_values = [sum(x**2) for x in solutions]
    es_restored.tell(solutions, fitness_values)

print(f"Total evaluations: {es_restored.result.evaluations}")
```
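The save-and-continue round trip itself does not depend on `cma`; a toy sketch with plain `pickle` and a dict standing in for the optimizer state:

```python
import pickle

# Stand-in for optimizer state: just the counters we care about.
state = {"countiter": 0, "evaluations": 0}
for _ in range(100):                # run 100 "iterations" of 10 evals each
    state["countiter"] += 1
    state["evaluations"] += 10

blob = pickle.dumps(state)          # save
restored = pickle.loads(blob)       # restore, then continue where we left off
for _ in range(50):
    restored["countiter"] += 1
    restored["evaluations"] += 10

print(restored["countiter"], restored["evaluations"])  # 150 1500
```

The real optimizer works the same way: counters and distribution parameters survive the round trip, so the restored instance continues as if never interrupted.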

## CMA Alias

```python { .api }
# CMA is a shortcut alias for CMAEvolutionStrategy
import cma

# These are equivalent:
es1 = cma.CMAEvolutionStrategy([0, 0, 0], 0.5)
es2 = cma.CMA([0, 0, 0], 0.5)

# A local alias also keeps call sites short:
CMA = cma.CMAEvolutionStrategy
es = CMA([0, 0, 0], 0.5)
```