Nevergrad - A gradient-free optimization platform


This documentation is a work in progress, feel free to help us update/improve/restructure it!

Quick start

nevergrad is a Python 3.6+ library. It can be installed with:

pip install nevergrad

You can find other installation options (including for Windows users) in the Getting started section.

Feel free to join the Nevergrad users Facebook group.

Minimizing a function with an optimizer (here NGOpt, our adaptive optimization algorithm) can be done as follows:

import nevergrad as ng

def square(x):
    return sum((x - 0.5) ** 2)

# optimization on x as an array of shape (2,)
optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
recommendation = optimizer.minimize(square)  # best parameter found
print(recommendation.value)
# >>> [0.49999998 0.50000004]
Convergence of a population of points to the minimum with two-points DE.
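
Under the hood, minimize repeatedly asks the optimizer for candidates and tells it their losses. If you want to control the evaluations yourself (for instance to run them in parallel), you can use the ask/tell interface directly. Below is a minimal sketch of the equivalent loop for the square function above:

import nevergrad as ng

def square(x):
    return sum((x - 0.5) ** 2)

optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()  # sample a candidate to evaluate
    loss = square(*candidate.args, **candidate.kwargs)  # evaluate it ourselves
    optimizer.tell(candidate, loss)  # report the loss back to the optimizer
recommendation = optimizer.provide_recommendation()
print(recommendation.value)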

nevergrad also supports bounded continuous variables, discrete variables, and mixtures of those. To do this, one can specify the input space:

import nevergrad as ng

def fake_training(learning_rate: float, batch_size: int, architecture: str) -> float:
    # optimal for learning_rate=0.2, batch_size=4, architecture="conv"
    return (learning_rate - 0.2) ** 2 + (batch_size - 4) ** 2 + (0 if architecture == "conv" else 10)

# Instrumentation class is used for functions with multiple inputs
# (positional and/or keywords)
parametrization = ng.p.Instrumentation(
    # a log-distributed scalar between 0.001 and 1.0
    learning_rate=ng.p.Log(lower=0.001, upper=1.0),
    # an integer from 1 to 12
    batch_size=ng.p.Scalar(lower=1, upper=12).set_integer_casting(),
    # either "conv" or "fc"
    architecture=ng.p.Choice(["conv", "fc"]),
)

optimizer = ng.optimizers.NGOpt(parametrization=parametrization, budget=100)
recommendation = optimizer.minimize(fake_training)

print(recommendation.kwargs)  # shows the recommended keyword arguments of the function
# >>> {'learning_rate': 0.1998, 'batch_size': 4, 'architecture': 'conv'}

Learn more about parametrization in the Parametrization section!
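
Instrumentation also handles positional parameters, so functions mixing positional and keyword arguments fit the same pattern. The sketch below uses a made-up loss function (for illustration only) with one bounded positional array and one keyword scalar:

import nevergrad as ng

def loss(arr, scale=1.0):  # hypothetical function, for illustration only
    return scale * sum(arr ** 2)

parametrization = ng.p.Instrumentation(
    ng.p.Array(shape=(2,)).set_bounds(lower=-1.0, upper=1.0),  # positional argument
    scale=ng.p.Scalar(lower=0.1, upper=2.0),  # keyword argument
)
optimizer = ng.optimizers.NGOpt(parametrization=parametrization, budget=100)
recommendation = optimizer.minimize(loss)
print(recommendation.args, recommendation.kwargs)  # recommended positional and keyword values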


Citing

@misc{nevergrad,
    author = {J. Rapin and O. Teytaud},
    title = {{Nevergrad - A gradient-free optimization platform}},
    year = {2018},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://GitHub.com/FacebookResearch/Nevergrad}},
}

License

nevergrad is released under the MIT license. See LICENSE for additional details, as well as our Terms of Use and Privacy Policy. Copyright © Meta Platforms, Inc.
