A Python toolbox for performing gradient-free optimization with unified interfaces for optimization algorithms and parameter handling.
Parametrizable optimizer configurations that enable algorithm customization and automated hyperparameter tuning. Optimizer families provide factory patterns for creating specialized optimizer variants with configurable parameters and behavior.
Configurable Evolution Strategy algorithms with customizable parameters for adaptation, selection, and mutation strategies.
class ParametrizedOnePlusOne:
    """
    Configurable (1+1) Evolution Strategy.
    Parameters:
    - noise_handling: Noise handling strategy
    - mutation: Mutation type and parameters
    - crossover: Crossover probability
    - use_sphere: Use sphere mutation
    """
class EvolutionStrategy:
    """
    Configurable Evolution Strategy family.
    Parameters:
    - recombination_weights: Recombination weight strategy
    - popsize: Population size
    - offsprings: Number of offspring
    - ranker: Selection ranking method
    """
Covariance Matrix Adaptation Evolution Strategy with extensive configuration options for different problem characteristics and computational budgets.

class ParametrizedCMA:
    """
    Configurable CMA-ES with various parameters.
    Parameters:
    - scale: Coordinate scaling factor
    - elitist: Use elitist strategy
    - diagonal: Use diagonal adaptation
    - fcmaes: Use fast CMA-ES
    - popsize: Population size multiplier
    - active: Use active covariance matrix adaptation
    - random_init: Random initialization strategy
    """

Configurable Differential Evolution algorithms with various mutation strategies, crossover types, and selection mechanisms.
class DifferentialEvolution:
    """
    Configurable Differential Evolution family.
    Parameters:
    - initialization: Population initialization method
    - scale: Mutation scale factor (F parameter)
    - crossover: Crossover probability (CR parameter)
    - popsize: Population size
    - strategy: DE mutation strategy
    """

Configurable Bayesian Optimization with customizable acquisition functions, kernel choices, and optimization strategies.
class ParametrizedBO:
    """
    Configurable Bayesian Optimization.
    Parameters:
    - initialization: Initial design strategy
    - middle_point: Use middle point initialization
    - utility_kind: Acquisition function type
    - utility_kappa: Exploration parameter
    - utility_xi: Exploitation parameter
    - gp_parameters: Gaussian process configuration
    """
class BayesOptim:
    """
    Bayesian optimization configuration framework.
    Parameters:
    - random_state: Random state for reproducibility
    - init_budget: Initial exploration budget
    - middle_point: Middle point initialization
    """
Configurable surrogate model-based optimization with various model types and learning strategies.

class ParametrizedMetaModel:
    """
    Configurable metamodel optimization.
    Parameters:
    - model: Surrogate model type ("polynomial", "neural", "svm", "rf")
    - acquisition: Acquisition strategy
    - multivariate_optimizer: Underlying optimizer for metamodel
    """

Configurable sampling-based search methods with various sequence types and initialization strategies.
class RandomSearchMaker:
    """
    Configurable random search variants.
    Parameters:
    - sampler: Sampling strategy
    - scrambled: Use scrambled sequences
    - opposition_mode: Opposition-based learning mode
    - cauchy: Use Cauchy distribution
    """
class SamplingSearch:
    """
    Configurable sampling-based search methods.
    Parameters:
    - sampler: Base sampling method
    - scrambled: Scrambling strategy
    - autorescale: Automatic rescaling
    - opposition_mode: Opposition learning
    """

Multi-algorithm families that combine multiple optimization strategies with configurable selection and execution patterns.
class ConfPortfolio:
    """
    Configured portfolio optimizer.
    Parameters:
    - optimizers: List of optimizer configurations
    - weights: Selection weights for optimizers
    - resampling: Resampling strategy
    """
class NonObjectOptimizer:
    """
    Non-objective optimizer wrapper.
    Parameters:
    - method: Underlying optimization method
    - random_state: Random state configuration
    """
Chaining frameworks that combine different algorithms sequentially with configurable transition criteria and resource allocation.

class Chaining:
    """
    Sequential optimizer chaining framework.
    Parameters:
    - optimizers: Sequence of optimizer configurations
    - budgets: Budget allocation for each optimizer
    - restart: Restart strategy between optimizers
    """
class NoisySplit:
    """
    Noisy optimization with splitting strategies.
    Parameters:
    - num_optims: Number of parallel optimizers
    - num_suggestions: Suggestions per optimizer
    - discrete: Handle discrete variables
    """
Configurable PSO algorithms with various topologies, parameter adaptation, and acceleration strategies.

class ConfPSO:
    """
    Configured Particle Swarm Optimization.
    Parameters:
    - popsize: Swarm size
    - omega: Inertia weight
    - phip: Cognitive acceleration coefficient
    - phig: Social acceleration coefficient
    - max_speed: Maximum particle velocity
    """
Specialized algorithm families for specific optimization scenarios and problem characteristics.

class ParametrizedTBPSA:
    """
    Configurable TBPSA algorithm.
    Parameters:
    - naive: Use naive implementation
    - initial_popsize: Initial population size
    - max_offspring: Maximum offspring per generation
    """
class EMNA:
    """
    Estimation of Multivariate Normal Algorithm.
    Parameters:
    - popsize: Population size
    - sample_size: Sample size for distribution estimation
    - naive: Use naive implementation
    """
class ConfSplitOptimizer:
    """
    Configured split optimization strategies.
    Parameters:
    - num_optims: Number of sub-optimizers
    - progressive: Progressive resource allocation
    - non_deterministic_descriptor: Non-deterministic optimization
    """
Configuration frameworks for integrating external optimization libraries with nevergrad's interface.

class Pymoo:
    """
    Pymoo integration family.
    Parameters:
    - algorithm: Pymoo algorithm name
    - termination: Termination criteria
    - save_history: Save optimization history
    """
import nevergrad as ng
import numpy as np

# Create custom CMA-ES configuration
custom_cma = ng.families.ParametrizedCMA(
    scale=1.0,
    elitist=True,
    diagonal=False,
    popsize=4 + int(3 * np.log(10)),  # 4 + 3*ln(dim) rule of thumb, here for dim=10
)

# Use with parametrization
param = ng.p.Array(shape=(10,))
optimizer = custom_cma(parametrization=param, budget=200)

# Custom DE with specific parameters
custom_de = ng.families.DifferentialEvolution(
    initialization="LHS",  # Latin Hypercube Sampling
    scale=0.8,             # Mutation scale factor (F)
    crossover=0.9,         # Crossover probability (CR)
    popsize=50,
    strategy="DE/rand/1",
)
optimizer = custom_de(parametrization=param, budget=100)

# Sequential optimization: start with random search, then CMA-ES
chain = ng.families.Chaining(
    [
        ng.families.RandomSearchMaker(),
        ng.families.ParametrizedCMA(diagonal=True),
    ],
    budgets=[50],  # first 50 evaluations for random search; CMA-ES uses the remaining 150
)
optimizer = chain(parametrization=param, budget=200)

# Custom Bayesian optimization setup
custom_bo = ng.families.ParametrizedBO(
    initialization="Hammersley",  # Hammersley sequence initialization
    utility_kind="ucb",           # Upper Confidence Bound acquisition
    utility_kappa=2.576,          # 99% confidence level
    middle_point=True,
)
optimizer = custom_bo(parametrization=param, budget=100)

# Create portfolio of different optimizers
portfolio = ng.families.ConfPortfolio(
    optimizers=[
        ng.families.ParametrizedCMA(diagonal=True),
        ng.families.DifferentialEvolution(scale=0.5),
        ng.families.ParametrizedBO(),
    ],
    weights=[0.4, 0.4, 0.2],  # Allocation weights
)
optimizer = portfolio(parametrization=param, budget=300)

# Neural network metamodel with CMA-ES backend
metamodel = ng.families.ParametrizedMetaModel(
    model="neural",
    acquisition="improvement",
    multivariate_optimizer=ng.families.ParametrizedCMA(),
)
optimizer = metamodel(parametrization=param, budget=150)

# Quasi-random sampling with opposition learning
sampling = ng.families.SamplingSearch(
    sampler="Halton",
    scrambled=True,
    opposition_mode="opposite",
    autorescale=True,
)
optimizer = sampling(parametrization=param, budget=100)

Install with Tessl CLI
npx tessl i tessl/pypi-nevergrad