Nevergrad - A gradient-free optimization platform
This documentation is a work in progress, feel free to help us update/improve/restructure it!
Quick start
nevergrad is a Python 3.6+ library. It can be installed with:
pip install nevergrad
You can find other installation options (including for Windows users) in the Getting started section.
Feel free to join the Nevergrad users Facebook group.
Minimizing a function using an optimizer (here NgIohTuned, our adaptive optimization algorithm) can easily be run with:
import nevergrad as ng
def square(x):
    return sum((x - 0.5) ** 2)
# optimization on x as an array of shape (2,)
optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
recommendation = optimizer.minimize(square) # best value
print(recommendation.value)
# >>> [0.49999998 0.50000004]
Figure: convergence of a population of points to the minimum with two-points DE.
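The same optimization can also be driven step by step through the ask and tell interface (see the dedicated section below), which is handy when evaluations are dispatched to several workers. Here is a minimal sketch reusing the square function above; it is equivalent to calling minimize and is only meant as an illustration:

import nevergrad as ng

def square(x):
    return sum((x - 0.5) ** 2)

optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()      # the optimizer proposes a candidate
    loss = square(candidate.value)   # evaluate it (possibly in a separate worker)
    optimizer.tell(candidate, loss)  # report the loss back to the optimizer
recommendation = optimizer.provide_recommendation()
print(recommendation.value)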
nevergrad also supports bounded continuous variables, discrete variables, and mixtures of both. To do this, one can specify the input space:
import nevergrad as ng
def fake_training(learning_rate: float, batch_size: int, architecture: str) -> float:
    # optimal for learning_rate=0.2, batch_size=4, architecture="conv"
    return (learning_rate - 0.2) ** 2 + (batch_size - 4) ** 2 + (0 if architecture == "conv" else 10)

# Instrumentation class is used for functions with multiple inputs
# (positional and/or keywords)
parametrization = ng.p.Instrumentation(
    # a log-distributed scalar between 0.001 and 1.0
    learning_rate=ng.p.Log(lower=0.001, upper=1.0),
    # an integer from 1 to 12
    batch_size=ng.p.Scalar(lower=1, upper=12).set_integer_casting(),
    # either "conv" or "fc"
    architecture=ng.p.Choice(["conv", "fc"]),
)
optimizer = ng.optimizers.NGOpt(parametrization=parametrization, budget=100)
recommendation = optimizer.minimize(fake_training)
print(recommendation.kwargs) # shows the recommended keyword arguments of the function
# >>> {'learning_rate': 0.1998, 'batch_size': 4, 'architecture': 'conv'}
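Continuous inputs can also be bounded directly, without the Instrumentation wrapper. Below is a minimal sketch (the bounds and budget are chosen purely for illustration) optimizing the square function from above over a 2D array constrained to [-1, 1]:

import nevergrad as ng

# a 2D array of values constrained to [-1, 1], used as the optimization space
param = ng.p.Array(shape=(2,)).set_bounds(lower=-1.0, upper=1.0)
optimizer = ng.optimizers.NGOpt(parametrization=param, budget=100)
recommendation = optimizer.minimize(lambda x: sum((x - 0.5) ** 2))
print(recommendation.value)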
Learn more about parametrization in the Parametrization section!
- Getting started
- How to perform optimization
- Basic example
- Using several workers
- Ask and tell interface
- Choosing an optimizer
- Telling non-asked points, or suggesting points
- Adding callbacks
- Optimization with constraints
- Optimizing machine learning hyperparameters
- Example with permutation
- Example of chaining, or inoculation, or initialization of an evolutionary algorithm
- Multiobjective minimization with Nevergrad
- Reproducibility
- Parametrizing your optimization
- Examples - Nevergrad for machine learning
- Optimizers API Reference
- Optimizer API
- Callbacks
- Configurable optimizers
- Optimizers
AXP
BFGSCMA
BFGSCMAPlus
BayesOptim
CM
CMA
CMandAS2
CMandAS3
CSEC
CSEC10
CSEC11
Carola3
Chaining
ChoiceBase
ConfPSO
ConfPortfolio
ConfSplitOptimizer
ConfiguredPSO
EDA
EMNA
F2SQPCMA
F3SQPCMA
FSQPCMA
ForceMultiCobyla
LogBFGSCMA
LogBFGSCMAPlus
LogMultiBFGS
LogMultiBFGSPlus
LogSQPCMA
LogSQPCMAPlus
MEDA
MPCEDA
MetaCMA
MultiBFGS
MultiBFGSPlus
MultiCobyla
MultiCobylaPlus
MultiDiscrete
MultiSQP
MultiSQPPlus
MultipleSingleRuns
NGDSRW
NGO
NGOpt
NGOpt10
NGOpt12
NGOpt13
NGOpt14
NGOpt15
NGOpt16
NGOpt21
NGOpt36
NGOpt38
NGOpt39
NGOpt4
NGOpt8
NGOptBase
NGOptDSBase
NGOptF
NGOptF2
NGOptF3
NGOptF5
NGOptRW
NgDS
NgDS11
NgDS2
NgIoh
NgIoh10
NgIoh11
NgIoh12
NgIoh12b
NgIoh13
NgIoh13b
NgIoh14
NgIoh14b
NgIoh15
NgIoh15b
NgIoh16
NgIoh17
NgIoh18
NgIoh19
NgIoh2
NgIoh20
NgIoh21
NgIoh3
NgIoh4
NgIoh5
NgIoh6
NgIoh7
NgIoh8
NgIoh9
NgIohRW2
NgIohTuned
NoisyBandit
NoisySplit
PCEDA
ParametrizedBO
ParametrizedCMA
ParametrizedMetaModel
ParametrizedOnePlusOne
ParametrizedTBPSA
Portfolio
Rescaled
SPSA
SQPCMA
SQPCMAPlus
Shiwa
SplitOptimizer
SqrtBFGSCMA
SqrtBFGSCMAPlus
SqrtMultiBFGS
SqrtMultiBFGSPlus
SqrtSQPCMA
SqrtSQPCMAPlus
Wiz
cGA
rescaled()
smooth_copy()
- Parametrization API reference
- Running algorithm benchmarks
- Examples - Nevergrad for R
- Examples of benchmarks
- Noisy optimization
- One-shot optimization
- Comparison-based methods for ill-conditioned problems
- Ill-conditioned function
- Discrete
- List of benchmarks
basic()
compabasedillcond()
dim10_select_one_feature()
dim10_select_two_features()
dim10_smallbudget()
doe_dim4()
illcond()
metanoise()
noise()
oneshot1()
oneshot2()
oneshot3()
oneshot4()
repeated_basic()
adversarial_attack()
alldes()
aquacrop_fao()
bonnans()
causal_similarity()
ceviche()
complex_tsp()
constrained_illconditioned_parallel()
control_problem()
deceptive()
doe()
double_o_seven()
far_optimum_es()
fishing()
fiveshots()
harderparallel()
hdbo4d()
hdmultimodal()
illcondi()
illcondipara()
image_multi_similarity()
image_multi_similarity_cv()
image_multi_similarity_pgan()
image_multi_similarity_pgan_cv()
image_quality()
image_quality_cv()
image_quality_cv_pgan()
image_quality_pgan()
image_quality_proxy()
image_quality_proxy_pgan()
image_similarity()
image_similarity_and_quality()
image_similarity_and_quality_cv()
image_similarity_and_quality_cv_pgan()
image_similarity_and_quality_pgan()
image_similarity_pgan()
image_single_quality()
image_single_quality_pgan()
instrum_discrete()
keras_tuning()
lowbudget()
lsgo()
mixsimulator()
mlda()
mldakmeans()
mltuning()
mono_rocket()
morphing_pgan_quality()
ms_bbob()
multi_ceviche()
multi_ceviche_c0()
multi_ceviche_c0_warmstart()
multi_ceviche_c0p()
multimodal()
multiobjective_example()
multiobjective_example_hd()
multiobjective_example_many()
multiobjective_example_many_hd()
naive_seq_keras_tuning()
naive_seq_mltuning()
naive_veryseq_keras_tuning()
naivemltuning()
nano_naive_seq_mltuning()
nano_naive_veryseq_mltuning()
nano_seq_mltuning()
nano_veryseq_mltuning()
neuro_control_problem()
newdoe()
noisy()
nozp_noms_bbob()
olympus_emulators()
olympus_surfaces()
oneshot()
oneshot_mltuning()
paraalldes()
parahdbo4d()
parallel()
parallel_small_budget()
paramultimodal()
pbbob()
pbo_reduced_suite()
pbo_suite()
pbt()
photonics()
photonics2()
powersystems()
ranknoisy()
realworld()
reduced_yahdlbbbob()
refactor_optims()
rocket()
seq_keras_tuning()
seq_mltuning()
sequential_fastgames()
sequential_instrum_discrete()
sequential_topology_optimization()
simple_tsp()
skip_ci()
small_photonics()
small_photonics2()
smallbudget_lsgo()
spsa_benchmark()
team_cycling()
topology_optimization()
ultrasmall_photonics()
ultrasmall_photonics2()
unit_commitment()
veryseq_keras_tuning()
verysmall_photonics()
verysmall_photonics2()
yabbob()
yabigbbob()
yaboundedbbob()
yaboxbbob()
yaconstrainedbbob()
yahdbbob()
yahdlbbbob()
yahdnoisybbob()
yahdnoisysplitbbob()
yahdsplitbbob()
yamegapenbbob()
yamegapenbigbbob()
yamegapenboundedbbob()
yamegapenboxbbob()
yamegapenhdbbob()
yanoisybbob()
yanoisysplitbbob()
yaonepenbbob()
yaonepenbigbbob()
yaonepenboundedbbob()
yaonepenboxbbob()
yaonepennoisybbob()
yaonepenparabbob()
yaonepensmallbbob()
yaparabbob()
yapenbbob()
yapenboundedbbob()
yapenboxbbob()
yapennoisybbob()
yapenparabbob()
yapensmallbbob()
yasmallbbob()
yasplitbbob()
yatinybbob()
yatuningbbob()
yawidebbob()
zp_ms_bbob()
zp_pbbob()
- Examples - Working with Pyomo model
- Installation and configuration on Windows
- Contributing to Nevergrad
- Open Optimization Competition 2020
Citing
@misc{nevergrad,
    author = {J. Rapin and O. Teytaud},
    title = {{Nevergrad - A gradient-free optimization platform}},
    year = {2018},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://GitHub.com/FacebookResearch/Nevergrad}},
}
License
nevergrad is released under the MIT license. See LICENSE for additional details about it, as well as our Terms of Use and Privacy Policy.
Copyright © Meta Platforms, Inc.