Natural evolution strategies (NES) are a family of numerical optimization algorithms for black-box problems. Similar in spirit to evolution strategies, they iteratively update the (continuous) parameters of a search distribution by following the natural gradient towards higher expected fitness.
The general procedure is as follows: the parameterized search distribution is used to produce a batch of search points, and the fitness function is evaluated at each such point. The distribution's parameters (which include strategy parameters) allow the algorithm to adaptively capture the (local) structure of the fitness function. For example, in the case of a Gaussian distribution, this comprises the mean and the covariance matrix. From the samples, NES estimates a search gradient on the parameters towards higher expected fitness. NES then performs a gradient ascent step along the natural gradient, a second-order method which, unlike the plain gradient, renormalizes the update with respect to uncertainty. This step is crucial, since it prevents oscillations, premature convergence, and undesired effects stemming from a given parameterization. The entire process reiterates until a stopping criterion is met.
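To make the sampling phase concrete, the following is a minimal sketch of one iteration's sampling and evaluation in Python, assuming a Gaussian search distribution and an illustrative fitness function (both are assumptions made for the example, not part of any reference implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Illustrative black-box fitness: negative sphere function, maximal at the origin.
    return -np.sum(x ** 2)

# Parameters of the Gaussian search distribution: mean and covariance matrix.
mu = np.array([2.0, -1.5])
cov = np.eye(2)

# One iteration's sampling phase: draw a batch of search points and
# evaluate the fitness function at each of them.
lam = 10  # population size
samples = rng.multivariate_normal(mu, cov, size=lam)
fitnesses = np.array([fitness(x) for x in samples])
```

The gradient estimation and update steps that complete the iteration are developed below.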
All members of the NES family operate based on the same principles. They differ in the type of probability distribution and the gradient approximation method used. Different search spaces require different search distributions; for example, in low dimensionality it can be highly beneficial to model the full covariance matrix. In high dimensions, on the other hand, a more scalable alternative is to limit the covariance to the diagonal only. In addition, highly multi-modal search spaces may benefit from more heavy-tailed distributions (such as Cauchy, as opposed to the Gaussian). A last distinction arises between distributions where we can analytically compute the natural gradient, and more general distributions where we need to estimate it from samples. A quick parameter count, sketched below, shows why the full covariance becomes impractical as the dimensionality grows.
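For a $d$-dimensional Gaussian, the mean contributes $d$ parameters, a full covariance matrix adds $d(d+1)/2$ more, and a diagonal covariance adds only $d$ (the function name below is illustrative):

```python
def num_strategy_params(d: int, full_covariance: bool) -> int:
    # Mean contributes d parameters; a full (symmetric) covariance adds
    # d * (d + 1) / 2 more, while a diagonal covariance adds only d.
    return d + (d * (d + 1) // 2 if full_covariance else d)

for d in (2, 10, 100):
    print(d, num_strategy_params(d, True), num_strategy_params(d, False))
# d = 100: 5150 parameters with a full covariance vs. only 200 with a diagonal one
```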
Let $\theta$ denote the parameters of the search distribution $\pi(x \mid \theta)$ and $f(x)$ the fitness function evaluated at $x$. NES then pursues the objective of maximizing the expected fitness under the search distribution

$$J(\theta) = \mathbb{E}_\theta[f(x)] = \int f(x)\, \pi(x \mid \theta)\, dx$$
through gradient ascent. Using the log-likelihood identity $\nabla_\theta \pi(x \mid \theta) = \pi(x \mid \theta)\, \nabla_\theta \log \pi(x \mid \theta)$, the gradient can be rewritten as

$$\nabla_\theta J(\theta) = \nabla_\theta \int f(x)\, \pi(x \mid \theta)\, dx = \int f(x)\, \nabla_\theta \log \pi(x \mid \theta)\, \pi(x \mid \theta)\, dx = \mathbb{E}_\theta\!\left[ f(x)\, \nabla_\theta \log \pi(x \mid \theta) \right],$$
that is, the expected value of $f(x)$ times the log-derivatives at $x$. In practice, it is possible to use the Monte Carlo approximation based on a finite number of $\lambda$ samples:

$$\nabla_\theta J(\theta) \approx \frac{1}{\lambda} \sum_{k=1}^{\lambda} f(x_k)\, \nabla_\theta \log \pi(x_k \mid \theta).$$
Finally, the parameters of the search distribution can be updated iteratively by gradient ascent:

$$\theta \leftarrow \theta + \eta \nabla_\theta J(\theta),$$

where $\eta$ is the learning rate.
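As a concrete illustration, the following sketch estimates this search gradient and performs one plain ascent step for an isotropic Gaussian with fixed standard deviation, adapting only the mean (the fitness function and all hyperparameter values are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    return -np.sum(x ** 2)  # illustrative black-box fitness

mu, sigma = np.array([2.0, -1.5]), 1.0  # search distribution: N(mu, sigma^2 I)
lam, eta = 50, 0.1                      # population size, learning rate

samples = mu + sigma * rng.standard_normal((lam, 2))
f = np.array([fitness(x) for x in samples])

# Log-derivative of the isotropic Gaussian with respect to its mean:
# grad_mu log pi(x | mu) = (x - mu) / sigma^2
grad_log = (samples - mu) / sigma ** 2

# Monte Carlo estimate of the search gradient, then a plain ascent step.
grad_J = (f[:, None] * grad_log).mean(axis=0)
mu = mu + eta * grad_J
```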
Instead of using the plain stochastic gradient for updates, NES follows the natural gradient, which has been shown to possess numerous advantages over the plain (vanilla) gradient, e.g.:

- the update is independent of the particular parameterization chosen for the search distribution,
- the update magnitudes are renormalized with respect to uncertainty, preventing oscillations and premature convergence.
The NES update is therefore

$$\theta \leftarrow \theta + \eta \mathbf{F}^{-1} \nabla_\theta J(\theta),$$

where

$$\mathbf{F} = \mathbb{E}\!\left[ \nabla_\theta \log \pi(x \mid \theta)\, \nabla_\theta \log \pi(x \mid \theta)^{\top} \right]$$

is the Fisher information matrix. The Fisher matrix can sometimes be computed exactly; otherwise it is estimated from samples, reusing the log-derivatives $\nabla_\theta \log \pi(x_k \mid \theta)$.
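A sketch of the sample-based route, continuing the isotropic-Gaussian example above (for this particular distribution the Fisher matrix for the mean parameters could also be computed exactly, as $\mathbf{F} = \sigma^{-2} I$):

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma, lam, eta = np.zeros(2), 1.0, 50, 0.1

samples = mu + sigma * rng.standard_normal((lam, 2))
f = -np.sum(samples ** 2, axis=1)       # illustrative fitness, as before
grad_log = (samples - mu) / sigma ** 2  # log-derivatives w.r.t. the mean

grad_J = (f[:, None] * grad_log).mean(axis=0)

# Estimate the Fisher matrix as the sample average of the outer products
# of the log-derivatives, reusing the values already computed above.
F = (grad_log[:, :, None] * grad_log[:, None, :]).mean(axis=0)

# Natural-gradient step: precondition the plain gradient with F^-1.
mu = mu + eta * np.linalg.solve(F, grad_J)
```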
NES utilizes rank-based fitness shaping in order to render the algorithm more robust and invariant under monotonically increasing transformations of the fitness function. For this purpose, the fitness of the population is transformed into a set of utility values $u_1 \geq \dots \geq u_\lambda$. Let $x_k$ denote the $k$th best individual. Replacing fitness with utility, the gradient estimate becomes

$$\nabla_\theta J(\theta) = \sum_{k=1}^{\lambda} u_k\, \nabla_\theta \log \pi(x_k \mid \theta).$$
The choice of utility function is a free parameter of the algorithm.
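One concrete scheme, sketched below, is the shaping function commonly used in the NES literature, $u_k \propto \max(0, \log(\lambda/2 + 1) - \log k)$, shifted so that the utilities sum to zero; any other monotone, rank-based scheme would serve the same purpose:

```python
import numpy as np

def rank_utilities(fitnesses):
    # Rank the samples: rank 1 is assigned to the best (highest) fitness.
    lam = len(fitnesses)
    ranks = np.empty(lam, dtype=int)
    ranks[np.argsort(fitnesses)[::-1]] = np.arange(1, lam + 1)
    # Shaping function u_k = max(0, log(lam/2 + 1) - log k),
    # normalized and shifted so the utilities sum to zero.
    u = np.maximum(0.0, np.log(lam / 2 + 1) - np.log(ranks))
    return u / u.sum() - 1.0 / lam

# The utilities depend only on the ranking, so any monotonically increasing
# transformation of the fitness values (here: cubing) yields identical output.
print(rank_utilities(np.array([3.0, -1.0, 2.5, 0.2])))
print(rank_utilities(np.array([3.0, -1.0, 2.5, 0.2]) ** 3))
```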
```
input: f, θ_init

repeat
    for k = 1 … λ do                                // λ is the population size
        draw sample x_k ~ π(·|θ)
        evaluate fitness f(x_k)
        calculate log-derivatives ∇_θ log π(x_k|θ)
    end
    assign the utilities u_k                        // based on rank
    estimate the gradient ∇_θJ ← (1/λ) Σ_k u_k ∇_θ log π(x_k|θ)
    estimate F ← (1/λ) Σ_k ∇_θ log π(x_k|θ) ∇_θ log π(x_k|θ)^T   // or compute it exactly
    update parameters θ ← θ + η F⁻¹ ∇_θJ            // η is the learning rate
until stopping criterion is met
```
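Putting the pieces together, the following is a compact runnable sketch of this loop in Python, again for an isotropic Gaussian with fixed σ that adapts only the mean (the function name, default values, and test fitness are all illustrative assumptions):

```python
import numpy as np

def nes_gaussian_mean(f, mu0, sigma=1.0, lam=50, eta=1.0, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu0, dtype=float)
    d = mu.size
    for _ in range(iters):                                    # repeat
        samples = mu + sigma * rng.standard_normal((lam, d))  # draw samples x_k
        fit = np.array([f(x) for x in samples])               # evaluate fitness f(x_k)
        grad_log = (samples - mu) / sigma ** 2                # log-derivatives
        ranks = np.empty(lam, dtype=int)                      # assign the utilities
        ranks[np.argsort(fit)[::-1]] = np.arange(1, lam + 1)  # (based on rank)
        u = np.maximum(0.0, np.log(lam / 2 + 1) - np.log(ranks))
        u = u / u.sum() - 1.0 / lam
        grad_J = u @ grad_log / lam                           # estimate the gradient
        F = grad_log.T @ grad_log / lam                       # estimate the Fisher matrix
        mu = mu + eta * np.linalg.solve(F, grad_J)            # natural-gradient update
    return mu                                                 # until budget is exhausted

# Usage: maximize an illustrative fitness (negative sphere), optimum at the origin.
print(nes_gaussian_mean(lambda x: -np.sum(x ** 2), mu0=[3.0, -2.0]))
# the returned mean should end up close to [0, 0]
```

Note that a full NES implementation would also adapt σ (or the whole covariance matrix) through its own log-derivatives; this sketch keeps it fixed for brevity.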