Merged
20 commits
f92fd3f  Inspiration from merged commit (gulshan-123, May 6, 2025)
7c9e0ab  changed bound setting method to avoid nan and inf (gulshan-123, May 6, 2025)
940f18f  Mutation added to avoid looking at inf, some parameter tuning (gulshan-123, May 6, 2025)
2238658  Merge branch 'main' into nevergrad-OneplusOne (gauravmanmode, May 23, 2025)
80c7e63  is global=true and default gaussian (gulshan-123, May 25, 2025)
4fd75fe  Finalise documentation with suggestions fixed (gulshan-123, May 25, 2025)
b662d8c  added docs clarification and improve typehinting (gulshan-123, May 26, 2025)
95a671c  Update algorithms.md (gauravmanmode, Jun 23, 2025)
d263653  Move documentation to docstring and fixes (gauravmanmode, Jun 23, 2025)
7395742  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Jun 23, 2025)
91110f5  Update refs.bib (gauravmanmode, Jun 23, 2025)
e695910  Update nevergrad_optimizers.py (gauravmanmode, Jun 23, 2025)
89d760f  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Jun 23, 2025)
91540a7  Merge branch 'main' into nevergrad-OneplusOne (janosg, Jun 30, 2025)
4c9d284  support both None and "none" (gauravmanmode, Jul 2, 2025)
cde3030  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Jul 2, 2025)
b08d370  Update nevergrad_optimizers.py (gauravmanmode, Jul 2, 2025)
1cc28ed  Merge branch 'main' into nevergrad-OneplusOne (gauravmanmode, Jul 9, 2025)
710bba9  Merge branch 'main' into nevergrad-OneplusOne (gauravmanmode, Jul 19, 2025)
e7619bc  add decorator fields, move docstring to docs (gauravmanmode, Jul 19, 2025)
74 changes: 74 additions & 0 deletions docs/source/algorithms.md
@@ -4124,6 +4124,80 @@ package. To use it, you need to have
- **n_restarts** (int): Number of times to restart the optimizer. Default is 1.
```

```{eval-rst}
.. dropdown:: nevergrad_oneplusone

.. code-block::

"nevergrad_oneplusone"

Minimize a scalar function using the One Plus One Evolutionary algorithm from Nevergrad.

The One Plus One evolutionary algorithm iterates to find a set of parameters that minimizes the loss
function. It does this by perturbing, or mutating, the parameters from the last iteration (the
parent). If the new (child) parameters yield a better result, the child becomes the new parent
whose parameters are perturbed, perhaps more aggressively. If the parent yields a better result, it
remains the parent and the next perturbation is less aggressive. The algorithm was originally
proposed by :cite:`Rechenberg1973`; the implementation in Nevergrad is based on the one-fifth
adaptation rule going back to :cite:`Schumer1968` (a minimal sketch of this loop follows the dropdown).

- **noise\_handling**: Method for handling noise. (Default: `None`) Can be:
- "random": A random point is reevaluated regularly, using the one-fifth adaptation rule.
- "optimistic": The best optimistic point is reevaluated regularly, embracing optimism in the face of uncertainty.
- A float coefficient can additionally be provided to tune how often these reevaluations occur (default 0.05): with 0.05, each evaluation has a 5% chance (1 in 20) of being repeated, i.e. the same candidate solution is reevaluated to better estimate its performance.
- **n\_cores**: Number of cores to use.
- **stopping.maxfun**: Maximum number of function evaluations.
- **mutation**: Type of mutation to apply. (Default: `"gaussian"`) Available options:
- "gaussian": Standard mutation by adding a Gaussian random variable (with progressive widening) to the best pessimistic point.
- "cauchy": Same as Gaussian but using a Cauchy distribution.
- "discrete": Mutates a randomly drawn variable (mutation occurs with probability 1/d in d dimensions, hence ~1 variable per mutation).
- "discreteBSO": Follows brainstorm optimization by gradually decreasing mutation rate from 1 to 1/d.
- "fastga": Fast Genetic Algorithm mutations from the current best.
- "doublefastga": Double-FastGA mutations from the current best :cite:`doerr2017`.
- "rls": Randomized Local Search — mutates one and only one variable.
- "portfolio": Random number of mutated bits, known as uniform mixing :cite:`dang2016`.
- "lengler": Mutation rate is a function of dimension and iteration index.
- "lengler{2|3|half|fourth}": Variants of the Lengler mutation rate adaptation.
- **sparse**: Whether to apply random mutations that set variables to zero. (Default: `False`)
- **smoother**: Whether to suggest smooth mutations. (Default: `False`)
- **annealing**: Annealing schedule to apply to mutation amplitude or temperature-based control. (Default: `"none"`) Options are:
- "none": No annealing is applied.
- "Exp0.9": Exponential decay with rate 0.9.
- "Exp0.99": Exponential decay with rate 0.99.
- "Exp0.9Auto": Exponential decay with rate 0.9, auto-scaled based on the problem horizon.
- "Lin100.0": Linear decay from 1 to 0 over 100 iterations.
- "Lin1.0": Linear decay from 1 to 0 over 1 iteration.
- "LinAuto": Linear decay automatically scaled to the problem horizon.
- **super\_radii**: Whether to apply extended radii beyond standard bounds for candidate generation, enabling broader exploration. (Default: `False`)
- **roulette\_size**: Size of the roulette wheel used for selection in the evolutionary process; affects the sampling diversity from past candidates. (Default: `64`)
- **antismooth**: Degree of anti-smoothing applied to prevent premature convergence in smooth landscapes; alters the landscape by penalizing overly smooth improvements. (Default: `4`)
- **crossover**: Whether to include a genetic crossover step every other iteration. (Default: `False`)
- **crossover\_type**: Method used for genetic crossover between individuals in the population. (Default: `"none"`) Available options:
- "none": No crossover is applied.
- "rand": Randomized selection of the crossover point.
- "max": Crossover at the point with maximum fitness gain.
- "min": Crossover at the point with minimum fitness gain.
- "onepoint": One-point crossover, splitting the genome at a single random point.
- "twopoint": Two-point crossover, splitting the genome at two points and exchanging the middle section.
- **tabu\_length**: Length of the tabu list used to prevent revisiting recently evaluated candidates in local search strategies; helps in escaping local minima. (Default: `1000`)
- **rotation**: Whether to apply rotational transformations to the search space, promoting invariance to axis-aligned structures and enhancing search performance in rotated coordinate systems. (Default: `False`)
- **seed**: Seed for the random number generator for reproducibility.
```
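
The adaptation loop described in the dropdown is short enough to sketch directly. The following is a minimal illustration of a (1+1) evolution strategy with the one-fifth success rule; it is not Nevergrad's implementation, and the widening factor, shrinking factor, and test function are arbitrary choices for the example.

```python
import numpy as np


def one_plus_one_es(fun, x0, sigma=1.0, max_evals=1000, seed=0):
    """Minimal (1+1)-ES with one-fifth success rule (illustration only)."""
    rng = np.random.default_rng(seed)
    parent = np.asarray(x0, dtype=float)
    f_parent = fun(parent)
    for _ in range(max_evals):
        # Mutate the parent with a Gaussian perturbation (cf. mutation="gaussian").
        child = parent + sigma * rng.standard_normal(parent.shape)
        f_child = fun(child)
        if f_child < f_parent:
            # The child wins: it becomes the parent, and the step size widens.
            parent, f_parent = child, f_child
            sigma *= 1.5
        else:
            # The parent wins: the step size shrinks. With widening factor 1.5,
            # shrinking by 1.5 ** (-1 / 4) is stationary at a 1/5 success rate.
            sigma *= 1.5 ** (-1 / 4)
    return parent, f_parent


best_x, best_f = one_plus_one_es(lambda x: np.sum(x**2), x0=[2.0, -1.5])
```

Calling the optimizer itself goes through the usual optimagic interface. A sketch, under the assumption that the options map one-to-one to the names documented above; the bounds and option values are placeholders, not recommendations:

```python
import numpy as np
import optimagic as om

res = om.minimize(
    fun=lambda x: np.sum(x**2),
    params=np.array([2.0, -1.5]),
    algorithm="nevergrad_oneplusone",
    bounds=om.Bounds(lower=np.full(2, -5.0), upper=np.full(2, 5.0)),
    algo_options={"stopping.maxfun": 500, "mutation": "gaussian"},
)
```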

## References

```{eval-rst}
42 changes: 42 additions & 0 deletions docs/source/refs.bib
@@ -927,6 +927,48 @@ @InProceedings{Zambrano2013
doi = {10.1109/CEC.2013.6557848},
}

@book{Rechenberg1973,
  author    = {Rechenberg, Ingo},
  title     = {Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution},
  publisher = {Frommann-Holzboog Verlag},
  address   = {Stuttgart},
  year      = {1973},
  url       = {https://gwern.net/doc/reinforcement-learning/exploration/1973-rechenberg.pdf},
  note      = {[Evolution Strategy: Optimization of Technical Systems According to the Principles of Biological Evolution]},
}

@article{Schumer1968,
  author   = {Schumer, M. and Steiglitz, K.},
  title    = {Adaptive step size random search},
  journal  = {IEEE Transactions on Automatic Control},
  year     = {1968},
  volume   = {13},
  number   = {3},
  pages    = {270--276},
  keywords = {Minimization methods; Gradient methods; Search methods; Adaptive control; Communication systems; Q measurement; Cost function; Newton method; Military computing},
  doi      = {10.1109/TAC.1968.1098903},
}

@misc{doerr2017,
  title         = {Fast Genetic Algorithms},
  author        = {Benjamin Doerr and Huu Phuoc Le and Régis Makhmara and Ta Duy Nguyen},
  year          = {2017},
  eprint        = {1703.03334},
  archivePrefix = {arXiv},
  primaryClass  = {cs.NE},
  url           = {https://arxiv.org/abs/1703.03334},
}

@misc{dang2016,
  title         = {Self-adaptation of Mutation Rates in Non-elitist Populations},
  author        = {Duc-Cuong Dang and Per Kristian Lehre},
  year          = {2016},
  eprint        = {1606.05551},
  archivePrefix = {arXiv},
  primaryClass  = {cs.NE},
  url           = {https://arxiv.org/abs/1606.05551},
}

@Misc{Nogueira2014,
author={Fernando Nogueira},
title={{Bayesian Optimization}: Open source constrained global optimization tool for {Python}},
17 changes: 0 additions & 17 deletions src/optimagic/algorithms.py
@@ -12,7 +12,6 @@
from typing import Type, cast

from optimagic.optimization.algorithm import Algorithm
from optimagic.optimizers.bayesian_optimizer import BayesOpt
from optimagic.optimizers.bhhh import BHHH
from optimagic.optimizers.fides import Fides
from optimagic.optimizers.iminuit_migrad import IminuitMigrad
@@ -367,7 +366,6 @@ def Scalar(self) -> BoundedGlobalGradientFreeNonlinearConstrainedScalarAlgorithm

@dataclass(frozen=True)
class BoundedGlobalGradientFreeScalarAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nevergrad_pso: Type[NevergradPSO] = NevergradPSO
nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -1034,7 +1032,6 @@ def Local(self) -> GradientBasedLocalNonlinearConstrainedScalarAlgorithms:

@dataclass(frozen=True)
class BoundedGlobalGradientFreeAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nevergrad_pso: Type[NevergradPSO] = NevergradPSO
nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -1099,7 +1096,6 @@ def Scalar(self) -> GlobalGradientFreeNonlinearConstrainedScalarAlgorithms:

@dataclass(frozen=True)
class GlobalGradientFreeScalarAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nevergrad_pso: Type[NevergradPSO] = NevergradPSO
nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -1309,7 +1305,6 @@ def Scalar(self) -> BoundedGradientFreeNonlinearConstrainedScalarAlgorithms:

@dataclass(frozen=True)
class BoundedGradientFreeScalarAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
nevergrad_pso: Type[NevergradPSO] = NevergradPSO
nlopt_bobyqa: Type[NloptBOBYQA] = NloptBOBYQA
@@ -1534,7 +1529,6 @@ def Scalar(self) -> BoundedGlobalNonlinearConstrainedScalarAlgorithms:

@dataclass(frozen=True)
class BoundedGlobalScalarAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nevergrad_pso: Type[NevergradPSO] = NevergradPSO
nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -2147,7 +2141,6 @@ def Local(self) -> GradientBasedLikelihoodLocalAlgorithms:

@dataclass(frozen=True)
class GlobalGradientFreeAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nevergrad_pso: Type[NevergradPSO] = NevergradPSO
nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -2234,7 +2227,6 @@ def Scalar(self) -> GradientFreeLocalScalarAlgorithms:

@dataclass(frozen=True)
class BoundedGradientFreeAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nag_dfols: Type[NagDFOLS] = NagDFOLS
nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
nevergrad_pso: Type[NevergradPSO] = NevergradPSO
@@ -2332,7 +2324,6 @@ def Scalar(self) -> GradientFreeNonlinearConstrainedScalarAlgorithms:

@dataclass(frozen=True)
class GradientFreeScalarAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
neldermead_parallel: Type[NelderMeadParallel] = NelderMeadParallel
nevergrad_pso: Type[NevergradPSO] = NevergradPSO
@@ -2456,7 +2447,6 @@ def Scalar(self) -> GradientFreeParallelScalarAlgorithms:

@dataclass(frozen=True)
class BoundedGlobalAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nevergrad_pso: Type[NevergradPSO] = NevergradPSO
nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -2539,7 +2529,6 @@ def Scalar(self) -> GlobalNonlinearConstrainedScalarAlgorithms:

@dataclass(frozen=True)
class GlobalScalarAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nevergrad_pso: Type[NevergradPSO] = NevergradPSO
nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -2854,7 +2843,6 @@ def Scalar(self) -> BoundedNonlinearConstrainedScalarAlgorithms:

@dataclass(frozen=True)
class BoundedScalarAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
fides: Type[Fides] = Fides
iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
ipopt: Type[Ipopt] = Ipopt
@@ -3167,7 +3155,6 @@ def Scalar(self) -> GradientBasedScalarAlgorithms:

@dataclass(frozen=True)
class GradientFreeAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nag_dfols: Type[NagDFOLS] = NagDFOLS
nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
neldermead_parallel: Type[NelderMeadParallel] = NelderMeadParallel
@@ -3242,7 +3229,6 @@ def Scalar(self) -> GradientFreeScalarAlgorithms:

@dataclass(frozen=True)
class GlobalAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
nevergrad_pso: Type[NevergradPSO] = NevergradPSO
nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -3372,7 +3358,6 @@ def Scalar(self) -> LocalScalarAlgorithms:

@dataclass(frozen=True)
class BoundedAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
fides: Type[Fides] = Fides
iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
ipopt: Type[Ipopt] = Ipopt
@@ -3510,7 +3495,6 @@ def Scalar(self) -> NonlinearConstrainedScalarAlgorithms:

@dataclass(frozen=True)
class ScalarAlgorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
fides: Type[Fides] = Fides
iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
ipopt: Type[Ipopt] = Ipopt
@@ -3687,7 +3671,6 @@ def Scalar(self) -> ParallelScalarAlgorithms:

@dataclass(frozen=True)
class Algorithms(AlgoSelection):
bayes_opt: Type[BayesOpt] = BayesOpt
bhhh: Type[BHHH] = BHHH
fides: Type[Fides] = Fides
iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
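
These removals take `bayes_opt` out of every gradient-free `AlgoSelection` dataclass. A hypothetical check of the effect, assuming optimagic exposes the selection tree as `om.algos` and chains capitalized properties like the `Scalar` and `Local` seen in the diff:

```python
import optimagic as om

# Hypothetical: after this change, bayes_opt no longer appears in the
# bounded global gradient-free scalar selection; nevergrad_pso remains.
selection = om.algos.Bounded.Global.GradientFree.Scalar
assert not hasattr(selection, "bayes_opt")
assert hasattr(selection, "nevergrad_pso")
```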