Evolution strategy

From Wikipedia, the free encyclopedia
For a specific strategy or a set of strategies in game theory, see Evolutionary strategy.

Evolution strategy (ES) is a subclass of evolutionary algorithms from computer science that serves as an optimization technique.[1] It uses the major genetic operators mutation, recombination and selection of parents.[2]

History

The 'evolution strategy' optimization technique was created in the early 1960s and developed further in the 1970s and later by Ingo Rechenberg, Hans-Paul Schwefel and their co-workers.[1]

Timeline of ES - selected algorithms[1]

Year | Description | Reference
1973 | ES introduced with mutation and selection | [3]
1994 | Derandomized self-adaptation ES: derandomized scheme of mutative step size control | [4]
1994 | CSA-ES: step size adaptation using information from old generations | [5]
2001 | CMA-ES | [6]
2006 | Weighted multi-recombination ES: uses weighted recombination | [7]
2007 | Meta-ES: incremental aggregation of partial semantic structures | [8]
2008 | Natural ES: uses the natural gradient | [9]
2010 | Exponential natural ES: a simpler version of natural ES | [10]
2014 | Limited memory CMA-ES: reduces time–memory complexity by covariance matrix decomposition | [11]
2016 | Fitness inheritance CMA-ES: reduces the computational cost of fitness evaluation via fitness inheritance | [12]
2017 | RS-CMSA-ES: uses repelling subpopulations | [13]
2017 | MA-ES: the covariance matrix update and the covariance matrix square root are not used | [14]
2018 | Weighted ES: weighted recombination analysed on general convex quadratic functions | [15]

Methods

Evolution strategies use natural problem-dependent representations, so problem space and search space are identical. In common with evolutionary algorithms, the operators are applied in a loop. An iteration of the loop is called a generation. The sequence of generations is continued until a termination criterion is met.

The special feature of the ES is the self-adaptation of mutation step sizes and the coevolution associated with it. The ES is briefly presented here in its standard form,[16][17][18] pointing out that there are many variants.[19][20][21][22] The real-valued chromosome contains, in addition to the $n$ decision variables, $n'$ mutation step sizes $\sigma_j$, where $1 \leq j \leq n' \leq n$. Often one mutation step size is used for all decision variables, or each decision variable has its own step size. Mate selection to produce $\lambda$ offspring is random, i.e. independent of fitness. First, new mutation step sizes are generated per mating by intermediate recombination of the parental $\sigma_j$ with subsequent mutation as follows:

\[ \sigma'_j = \sigma_j \cdot e^{\left(\mathcal{N}(0,1) - \mathcal{N}_j(0,1)\right)} \]

where $\mathcal{N}(0,1)$ is a normally distributed random variable with mean $0$ and standard deviation $1$. The draw $\mathcal{N}(0,1)$ applies to all $\sigma'_j$, while $\mathcal{N}_j(0,1)$ is determined anew for each $\sigma'_j$. Next, discrete recombination of the decision variables is followed by a mutation using the new mutation step sizes as standard deviations of the normal distribution. The new decision variables $x'_j$ are calculated as follows:

\[ x'_j = x_j + \mathcal{N}_j(0, \sigma'_j) \]

This results in an evolutionary search on two levels: first at the level of the problem itself, and second at the level of the mutation step sizes. In this way it can be ensured that the ES searches for its target in ever finer steps. However, this also carries the risk that the search can cross larger invalid areas of the search space only with difficulty.
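The two operators above translate directly into code. The following is a minimal sketch in Python; the function name, the use of NumPy, and the choice of one step size per decision variable are illustrative assumptions, not part of any standard implementation:

```python
import numpy as np

def make_offspring(parent1, parent2, rng):
    """Produce one offspring from two mates, following the standard ES
    operators described above. Each parent is a pair (x, sigma) of
    decision variables and per-variable mutation step sizes."""
    x1, s1 = parent1
    x2, s2 = parent2
    n = len(x1)

    # Intermediate recombination of the parental step sizes ...
    sigma = 0.5 * (s1 + s2)
    # ... followed by mutation: sigma'_j = sigma_j * exp(N(0,1) - N_j(0,1)),
    # with one common draw for all j and one fresh draw per j.
    common = rng.standard_normal()
    sigma = sigma * np.exp(common - rng.standard_normal(n))

    # Discrete recombination of the decision variables: each x_j is
    # taken from one of the two parents at random.
    x = np.where(rng.random(n) < 0.5, x1, x2)
    # Mutation with the new step sizes as standard deviations:
    # x'_j = x_j + N_j(0, sigma'_j).
    x = x + sigma * rng.standard_normal(n)
    return x, sigma
```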

Variants

The ES has two variants of best selection for the generation of the next parent population ($\mu$ is the number of parents, $\lambda$ the number of offspring):[2]

- $(\mu ,\lambda )$: the best $\mu$ of the $\lambda$ offspring become the next parent population, while the previous parents are always discarded.
- $(\mu +\lambda )$: the best $\mu$ individuals from the union of parents and offspring become the next parent population.

Bäck and Schwefel recommend that the value of $\lambda$ should be approximately seven times $\mu$,[17] whereby $\mu$ must not be chosen too small because of the resulting strong selection pressure. Suitable values for $\mu$ are application-dependent and must be determined experimentally. The selection of the next generation in evolution strategies is deterministic and based only on the fitness rankings, not on the actual fitness values. The resulting algorithm is therefore invariant with respect to monotonic transformations of the objective function.

The simplest and oldest[1] evolution strategy, the $(1+1)$-ES, operates on a population of size two: the current point (parent) and the result of its mutation. Only if the mutant's fitness is at least as good as that of the parent does it become the parent of the next generation; otherwise the mutant is discarded. More generally, $\lambda$ mutants can be generated and compete with the parent, called $(1+\lambda )$. In the $(1,\lambda )$-ES the best mutant becomes the parent of the next generation while the current parent is always discarded. For some of these variants, proofs of linear convergence (in a stochastic sense) have been derived on unimodal objective functions.[23][24]
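Continuing the sketch above, a complete $(\mu ,\lambda )$-ES with comma selection might look as follows. The population sizes, the initialization, and the termination after a fixed number of generations are illustrative assumptions; note $\lambda = 7\mu$, and that selection uses only the fitness ranking:

```python
def evolution_strategy(f, x0, mu=5, lam=35, generations=200, seed=0):
    """Minimal (mu, lambda)-ES sketch minimizing f, with lam = 7 * mu as
    recommended by Baeck and Schwefel. Uses make_offspring from above."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    # Initial parents: x0 plus noise, with unit step sizes.
    parents = [(x0 + rng.standard_normal(n), np.ones(n)) for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            # Mate selection is random, i.e. independent of fitness.
            i, j = rng.choice(mu, size=2, replace=False)
            offspring.append(make_offspring(parents[i], parents[j], rng))
        # Comma selection: the best mu offspring (by rank only) become
        # the next parents; the old parents are always discarded.
        offspring.sort(key=lambda ind: f(ind[0]))
        parents = offspring[:mu]
    return min(parents, key=lambda ind: f(ind[0]))

# Usage: minimize the 10-dimensional sphere function.
best_x, best_sigma = evolution_strategy(lambda x: float(np.sum(x**2)), np.zeros(10))
```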

Individual step sizes for each coordinate, or correlations between coordinates, which are essentially defined by an underlying covariance matrix, are controlled in practice either by self-adaptation or by covariance matrix adaptation (CMA-ES).[21] When the mutation step is drawn from a multivariate normal distribution using an evolving covariance matrix, it has been hypothesized that this adapted matrix approximates the inverse Hessian of the search landscape. This hypothesis has been proven for a static model relying on a quadratic approximation.[25] In 2025, Chen et al.[26] proposed a multi-agent evolution strategy for consensus-based distributed optimization, in which a novel step adaptation method helps multiple agents control the step size cooperatively.
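As a rough illustration of the idea behind covariance matrix adaptation (not the full CMA-ES of [6], which additionally uses evolution paths and cumulative step-size control), the following sketch draws offspring from a multivariate normal distribution and adapts the covariance matrix towards the empirical covariance of the selected steps; all constants are illustrative assumptions:

```python
def simplified_cma(f, m, sigma=0.5, lam=20, generations=100, seed=0):
    """Highly simplified covariance-adaptation sketch minimizing f.
    Only the mean and a rank-mu-style covariance update are modeled."""
    rng = np.random.default_rng(seed)
    n = len(m)
    mu = lam // 2          # number of selected offspring
    C = np.eye(n)          # mutation covariance matrix, initially isotropic
    c_mu = 0.3             # illustrative learning rate, not a tuned constant
    for _ in range(generations):
        # Sample lam mutation steps from N(0, C); offspring are m + sigma * y.
        y = rng.multivariate_normal(np.zeros(n), C, size=lam)
        x = m + sigma * y
        # Rank-based selection of the mu best steps.
        order = np.argsort([f(xi) for xi in x])
        y_sel = y[order[:mu]]
        # Move the mean towards the selected offspring.
        m = m + sigma * y_sel.mean(axis=0)
        # Adapt C towards the empirical covariance of the selected steps,
        # which is hypothesized to align with the inverse Hessian over time.
        C = (1 - c_mu) * C + c_mu * (y_sel.T @ y_sel) / mu
    return m
```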

References

  1. ^ a b c d Slowik, Adam; Kwasnicka, Halina (1 August 2020). "Evolutionary algorithms and their applications to engineering problems". Neural Computing and Applications. 32 (16): 12363–12379. doi:10.1007/s00521-020-04832-8. ISSN 1433-3058.
  2. ^ a b Alrashdi, Zaid; Sayyafzadeh, Mohammad (1 June 2019). "(μ+λ) Evolution strategy algorithm in well placement, trajectory, control and joint optimisation". Journal of Petroleum Science and Engineering. 177: 1042–1058. Bibcode:2019JPSE..177.1042A. doi:10.1016/j.petrol.2019.02.047. ISSN 0920-4105.
  3. ^ Vent, W. (January 1975). "Rechenberg, Ingo, Evolutionsstrategie — Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. 170 S. mit 36 Abb. Frommann-Holzboog-Verlag. Stuttgart 1973. Broschiert". Feddes Repertorium. 86 (5): 337. doi:10.1002/fedr.19750860506.
  4. ^ Ostermeier, Andreas; Gawelczyk, Andreas; Hansen, Nikolaus (December 1994). "A Derandomized Approach to Self-Adaptation of Evolution Strategies". Evolutionary Computation. 2 (4): 369–380. doi:10.1162/evco.1994.2.4.369.
  5. ^ Ostermeier, Andreas; Gawelczyk, Andreas; Hansen, Nikolaus (1994). "Step-size adaptation based on non-local use of selection information". Parallel Problem Solving from Nature — PPSN III. Lecture Notes in Computer Science. Vol. 866. Springer. pp. 189–198. doi:10.1007/3-540-58484-6_263. ISBN 978-3-540-58484-1.
  6. ^ Hansen, Nikolaus; Ostermeier, Andreas (June 2001). "Completely Derandomized Self-Adaptation in Evolution Strategies". Evolutionary Computation. 9 (2): 159–195. doi:10.1162/106365601750190398. PMID 11382355.
  7. ^ Arnold, Dirk V. (28 August 2006). "Weighted multirecombination evolution strategies". Theoretical Computer Science. 361 (1): 18–37. doi:10.1016/j.tcs.2006.04.003. ISSN 0304-3975.
  8. ^ Jung, Jason J.; Jo, Geun-Sik; Yeo, Seong-Won (2007). "Meta-evolution Strategy to Focused Crawling on Semantic Web". Artificial Neural Networks – ICANN 2007. Lecture Notes in Computer Science. Vol. 4669. Springer. pp. 399–407. doi:10.1007/978-3-540-74695-9_41. ISBN 978-3-540-74693-5.
  9. ^ Wierstra, Daan; Schaul, Tom; Glasmachers, Tobias; Sun, Yi; Peters, Jan; Schmidhuber, Jürgen (1 January 2014). "Natural evolution strategies". J. Mach. Learn. Res. 15 (1): 949–980. ISSN 1532-4435.
  10. ^ Glasmachers, Tobias; Schaul, Tom; Yi, Sun; Wierstra, Daan; Schmidhuber, Jürgen (7 July 2010). "Exponential natural evolution strategies". Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation (PDF). Association for Computing Machinery. pp. 393–400. doi:10.1145/1830483.1830557. ISBN 978-1-4503-0072-8.
  11. ^ Loshchilov, Ilya (12 July 2014). "A computationally efficient limited memory CMA-ES for large scale optimization". Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation. Association for Computing Machinery. pp. 397–404. arXiv:1404.5520. doi:10.1145/2576768.2598294. ISBN 978-1-4503-2662-9.
  12. ^ Liaw, Rung-Tzuo; Ting, Chuan-Kang (July 2016). "Enhancing covariance matrix adaptation evolution strategy through fitness inheritance". 2016 IEEE Congress on Evolutionary Computation (CEC). pp. 1956–1963. doi:10.1109/CEC.2016.7744027. ISBN 978-1-5090-0623-6.
  13. ^ Ahrari, Ali; Deb, Kalyanmoy; Preuss, Mike (September 2017). "Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations". Evolutionary Computation. 25 (3): 439–471. doi:10.1162/evco_a_00182. hdl:1887/76707. PMID 27070282.
  14. ^ Beyer, Hans-Georg; Sendhoff, Bernhard (October 2017). "Simplify Your Covariance Matrix Adaptation Evolution Strategy". IEEE Transactions on Evolutionary Computation. 21 (5): 746–759. Bibcode:2017ITEC...21..746B. doi:10.1109/TEVC.2017.2680320.
  15. ^ Akimoto, Youhei; Auger, Anne; Hansen, Nikolaus (6 September 2020). "Quality gain analysis of the weighted recombination evolution strategy on general convex quadratic functions". Theoretical Computer Science. 832: 42–67. arXiv:1608.04813. doi:10.1016/j.tcs.2018.05.015. ISSN 0304-3975.
  16. ^ Schwefel, Hans-Paul (1995). Evolution and Optimum Seeking. Sixth-Generation Computer Technology Series. New York: Wiley. ISBN 978-0-471-57148-3.
  17. ^ a b Bäck, Thomas; Schwefel, Hans-Paul (1993). "An Overview of Evolutionary Algorithms for Parameter Optimization". Evolutionary Computation. 1 (1): 1–23. doi:10.1162/evco.1993.1.1.1. ISSN 1063-6560.
  18. ^ Schwefel, Hans-Paul; Rudolph, Günter; Bäck, Thomas (1995). "Contemporary Evolution Strategies". In Morán, Frederico; Moreno, Alvaro; Merelo, J.J.; Chacón, Pablo (eds.). Proceedings of the Third European Conference on Advances in Artificial Life. Berlin, Heidelberg: Springer. pp. 893–907. doi:10.1007/3-540-59496-5_351. ISBN 978-3-540-59496-3.
  19. ^ Bäck, Thomas; Hoffmeister, Frank; Schwefel, Hans-Paul (1991). "A Survey of Evolution Strategies". In Belew, Richard K.; Booker, Lashon B. (eds.). Proceedings of the Fourth International Conference on Genetic Algorithms (ICGA). San Mateo, Calif.: Morgan Kaufmann. ISBN 978-1-55860-208-3.
  20. ^ Coelho, V. N.; Coelho, I. M.; Souza, M. J. F.; Oliveira, T. A.; Cota, L. P.; Haddad, M. N.; Mladenovic, N.; Silva, R. C. P.; Guimarães, F. G. (2016). "Hybrid Self-Adaptive Evolution Strategies Guided by Neighborhood Structures for Combinatorial Optimization Problems". Evolutionary Computation. 24 (4): 637–666. doi:10.1162/EVCO_a_00187. ISSN 1063-6560. PMID 27258842.
  21. ^ a b Hansen, Nikolaus; Ostermeier, Andreas (2001). "Completely Derandomized Self-Adaptation in Evolution Strategies". Evolutionary Computation. 9 (2): 159–195. doi:10.1162/106365601750190398. ISSN 1063-6560. PMID 11382355.
  22. ^ Hansen, Nikolaus; Kern, Stefan (2004). "Evaluating the CMA Evolution Strategy on Multimodal Test Functions". In Yao, Xin; Burke, Edmund K.; Lozano, José A.; Smith, Jim (eds.). Parallel Problem Solving from Nature - PPSN VIII. Lecture Notes in Computer Science. Vol. 3242. Berlin, Heidelberg: Springer. pp. 282–291. Bibcode:2004LNCS.3242..282H. doi:10.1007/978-3-540-30217-9_29. ISBN 978-3-540-23092-2.
  23. ^ Auger, Anne (April 2005). "Convergence results for the (1,λ)-SA-ES using the theory of φ-irreducible Markov chains". Theoretical Computer Science. 334 (1–3): 35–69. doi:10.1016/j.tcs.2004.11.017.
  24. ^ Jägersküpper, Jens (August 2006). "How the (1+1) ES using isotropic mutations minimizes positive definite quadratic forms". Theoretical Computer Science. 361 (1): 38–56. doi:10.1016/j.tcs.2006.04.004.
  25. ^ Shir, Ofer M.; Yehudayoff, Amir (January 2020). "On the covariance-Hessian relation in evolution strategies". Theoretical Computer Science. 801: 157–174. arXiv:1806.03674. doi:10.1016/j.tcs.2019.09.002.
  26. ^ Chen, Tai-You; Chen, Wei-Neng; Hao, Jin-Kao; Wang, Yang; Zhang, Jun (2025). "Multi-Agent Evolution Strategy With Cooperative and Cumulative Step Adaptation for Black-Box Distributed Optimization". IEEE Transactions on Evolutionary Computation. 29 (6): 2819–2833. Bibcode:2025ITEC...29.2819C. doi:10.1109/TEVC.2025.3525713. ISSN 1941-0026.
