
In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial-and-error problem solvers with a metaheuristic or stochastic optimization character.
In evolutionary computation, an initial set of candidate solutions is generated and iteratively updated. Each new generation is produced by stochastically removing less desired solutions and introducing small random changes as well as, depending on the method, mixing parental information. In biological terminology, a population of solutions is subjected to natural selection (or artificial selection), mutation and possibly recombination. As a result, the population will gradually evolve to increase in fitness, in this case the chosen fitness function of the algorithm.
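The generate–select–vary loop described above can be sketched in a few lines. The function and parameter names here are illustrative rather than from any particular library, and the operators are deliberately minimal: truncation selection and single-bit mutation on a toy "maximize the ones" problem.

```python
import random

def evolve(fitness, random_solution, mutate, generations=100, pop_size=20):
    """Minimal evolutionary loop: keep the fitter half of the population,
    then refill it with mutated copies of the survivors."""
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: sort by fitness and keep the better half (truncation selection).
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Variation: refill the population with mutated copies of random survivors.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

def flip_one_bit(bits):
    """Mutation: flip a single randomly chosen bit."""
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

# Toy problem: maximize the number of 1-bits in a 20-bit string.
best = evolve(
    fitness=sum,
    random_solution=lambda: [random.randint(0, 1) for _ in range(20)],
    mutate=flip_one_bit,
)
```

Because the survivors are carried over unmutated, the best fitness found never decreases from one generation to the next, which is the elitist behavior most practical variants share.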
Evolutionary computation techniques can produce highly optimized solutions in a wide range of problem settings, making them popular in computer science. Many variants and extensions exist, suited to more specific families of problems and data structures. Evolutionary computation is also sometimes used in evolutionary biology as an in silico experimental procedure to study common aspects of general evolutionary processes.
The concept of mimicking evolutionary processes to solve problems originates before the advent of computers, such as when Alan Turing proposed a method of genetic search in 1948.[1] Turing's B-type u-machines resemble primitive neural networks, and connections between neurons were learnt via a sort of genetic algorithm. His P-type u-machines resemble a method for reinforcement learning, where pleasure and pain signals direct the machine to learn certain behaviors. However, Turing's paper went unpublished until 1968, and he died in 1954, so this early work had little to no effect on the field of evolutionary computation that was to develop.[2]
Evolutionary computing as a field began in earnest in the 1950s and 1960s.[1] There were several independent attempts to use the process of evolution in computing at this time, which developed separately for roughly 15 years. Three branches emerged in different places to attain this goal: evolution strategies, evolutionary programming, and genetic algorithms. A fourth branch, genetic programming, eventually emerged in the early 1990s. These approaches differ in the method of selection, the permitted mutations, and the representation of genetic data. By the 1990s, the distinctions between the historic branches had begun to blur, and the term 'evolutionary computing' was coined in 1991 to denote a field spanning all four paradigms.[3]
In 1962, Lawrence J. Fogel initiated research on evolutionary programming in the United States, which was considered an artificial intelligence endeavor. In this system, finite state machines are used to solve a prediction problem: these machines would be mutated (adding or deleting states, or changing the state transition rules), and the best of these mutated machines would be evolved further in future generations. The final finite state machine may be used to generate predictions when needed. The evolutionary programming method was successfully applied to prediction problems, system identification, and automatic control. It was eventually extended to handle time series data and to model the evolution of gaming strategies.[3]
In 1964, Ingo Rechenberg and Hans-Paul Schwefel introduced the paradigm of evolution strategies in Germany.[3] Since traditional gradient descent techniques produce results that may get stuck in local minima, Rechenberg and Schwefel proposed that random mutations (applied to all parameters of some solution vector) may be used to escape these minima. Child solutions were generated from parent solutions, and the more successful of the two was kept for future generations. This technique was first used by the two to successfully solve optimization problems in fluid dynamics.[4] Initially, this optimization technique was performed without computers, instead relying on dice to determine random mutations. By 1965, the calculations were performed wholly by machine.[3]
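The scheme just described, keeping the better of one parent and one mutated child, is today known as a (1+1) evolution strategy. A minimal sketch follows; the fixed Gaussian step size and the toy two-dimensional objective are illustrative choices, not details from the historical work.

```python
import random

def one_plus_one_es(f, x, sigma=0.5, iterations=2000):
    """(1+1) evolution strategy: each step, Gaussian-mutate every parameter
    of the single parent, then keep whichever of parent/child has lower cost."""
    fx = f(x)
    for _ in range(iterations):
        child = [xi + random.gauss(0.0, sigma) for xi in x]  # mutate all parameters
        fc = f(child)
        if fc <= fx:  # the child replaces the parent only if it is no worse
            x, fx = child, fc
    return x, fx

# Toy objective: a quadratic bowl with its minimum at (1, 2).
sphere = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
best, cost = one_plus_one_es(sphere, [5.0, -5.0])
```

Because rejected children are simply discarded, the cost is non-increasing over the run; practical evolution strategies additionally adapt the step size sigma, which this sketch omits.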
John Henry Holland introduced genetic algorithms in the 1960s, and the approach was further developed at the University of Michigan in the 1970s.[5] While the other approaches were focused on solving problems, Holland primarily aimed to use genetic algorithms to study adaptation and determine how it may be simulated. Populations of chromosomes, represented as bit strings, were transformed by an artificial selection process, selecting for specific 'allele' bits in the bit string. Among other mutation methods, interactions between chromosomes were used to simulate the recombination of DNA between different organisms. While previous methods only tracked a single optimal organism at a time (having children compete with parents), Holland's genetic algorithms tracked large populations (having many organisms compete each generation).
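The bit-string operators involved can be illustrated with two hypothetical helper functions; the one-point crossover scheme and the per-bit mutation rate shown here are common textbook choices rather than Holland's exact formulation.

```python
import random

def one_point_crossover(a, b):
    """Recombination: cut two bit-string chromosomes at the same random
    point and exchange the tails, mimicking DNA recombination."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def point_mutation(bits, rate=0.05):
    """Mutation: flip each 'allele' bit independently with a small probability."""
    return [1 - b if random.random() < rate else b for b in bits]

# Crossing an all-zeros with an all-ones chromosome exchanges complementary segments.
parents = ([0] * 8, [1] * 8)
child1, child2 = one_point_crossover(*parents)
```

Note that one-point crossover conserves alleles across the pair: every bit present in the two parents ends up in exactly one of the two children.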
By the 1990s, a new approach to evolutionary computation that came to be called genetic programming emerged, advocated for by John Koza among others.[3] In this class of algorithms, the subject of evolution was itself a program written in a high-level programming language (there had been some previous attempts as early as 1958 to use machine code, but they met with little success). For Koza, the programs were Lisp S-expressions, which can be thought of as trees of sub-expressions. This representation permits programs to swap subtrees, representing a sort of genetic mixing. Programs are scored based on how well they complete a certain task, and the score is used for artificial selection. Sequence induction, pattern recognition, and planning were all successful applications of the genetic programming paradigm.
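The subtree-swapping recombination can be sketched using nested Python lists to stand in for Lisp S-expressions; the representation and helper names here are illustrative.

```python
import random

# A program such as x + x*x is represented as the nested list
# ['+', 'x', ['*', 'x', 'x']]: operator first, then operand subtrees.

def subtrees(tree, path=()):
    """Enumerate (path, subtree) pairs for every node of an expression tree."""
    yield path, tree
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    """Return a copy of tree with the subtree at the given path replaced."""
    if not path:
        return new
    out = list(tree)
    out[path[0]] = replace(tree[path[0]], path[1:], new)
    return out

def gp_crossover(a, b):
    """GP recombination: graft a random subtree of b onto a random node of a."""
    pa, _ = random.choice(list(subtrees(a)))
    _, sb = random.choice(list(subtrees(b)))
    return replace(a, pa, sb)

child = gp_crossover(['+', 'x', ['*', 'x', 'x']], ['-', ['*', 'x', 2], 'x'])
```

Because every node of an S-expression is itself a valid expression, any subtree can be grafted anywhere and the child remains a syntactically valid program, which is what makes this representation convenient for genetic mixing.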
Many other figures played a role in the history of evolutionary computing, although their work did not always fit into one of the major historical branches of the field. The earliest computational simulations of evolution using evolutionary algorithms and artificial life techniques were performed by Nils Aall Barricelli in 1953, with first results published in 1954.[6] Another pioneer in the 1950s was Alex Fraser, who published a series of papers on simulation of artificial selection.[7] As academic interest grew, dramatic increases in the power of computers allowed practical applications, including the automatic evolution of computer programs.[8] Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers, and also to optimize the design of systems.[9][10]
Evolutionary computing techniques mostly involve metaheuristic optimization algorithms. Broadly speaking, the field includes:
In recent years, many dubious algorithms have been proposed that are often just copies of existing algorithms (frequently Particle Swarm Optimization) in which only the metaphor has changed while the algorithm itself is not new at all. A thorough catalogue of many of these dubious algorithms has been published in the Evolutionary Computation Bestiary.[11] It is also important to note that many of these dubiously 'novel' algorithms have poor experimental validation.[12]
Evolutionary algorithms form a subset of evolutionary computation in that they generally only involve techniques implementing mechanisms inspired by biological evolution such as reproduction, mutation, recombination and natural selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the cost function determines the environment within which the solutions "live" (see also fitness function). Evolution of the population then takes place after the repeated application of the above operators.
In this process, there are two main forces that form the basis of evolutionary systems: recombination (e.g. crossover) and mutation create the necessary diversity and thereby facilitate novelty, while selection acts as a force increasing quality.
Many aspects of such an evolutionary process are stochastic. Changed pieces of information due to recombination and mutation are randomly chosen. On the other hand, selection operators can be either deterministic or stochastic. In the latter case, individuals with a higher fitness have a higher chance to be selected than individuals with a lower fitness, but typically even the weak individuals have a chance to become a parent or to survive.
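Stochastic selection of this kind, where the probability of being chosen is proportional to fitness, is often called fitness-proportionate or roulette-wheel selection. A minimal sketch, using an illustrative toy population:

```python
import random

def roulette_select(population, fitness, k=1):
    """Stochastic (fitness-proportionate) selection: every individual can be
    chosen, but with probability proportional to its fitness."""
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=k)

# Toy population with hypothetical fitness values: even the 'weak' individual
# retains a nonzero chance of becoming a parent.
pop = ['weak', 'medium', 'strong']
fitness = {'weak': 1.0, 'medium': 3.0, 'strong': 6.0}.get
parents = roulette_select(pop, fitness, k=1000)
```

Over many draws, 'strong' is selected about six times as often as 'weak', yet 'weak' still appears, which is exactly the property distinguishing stochastic from deterministic (truncation-style) selection.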
Genetic algorithms provide methods to model biological systems and systems biology that are linked to the theory of dynamical systems, since they are used to predict the future states of the system. This is just a vivid (but perhaps misleading) way of drawing attention to the orderly, well-controlled and highly structured character of development in biology.
However, the use of algorithms and informatics, in particular of computational theory, beyond the analogy to dynamical systems, is also relevant to understanding evolution itself.
This view has the merit of recognizing that there is no central control of development; organisms develop as a result of local interactions within and between cells. The most promising ideas about program-development parallels seem to be ones that point to an apparently close analogy between processes within cells and the low-level operation of modern computers.[13] Thus, biological systems are like computational machines that process input information to compute next states, meaning that biological systems are closer to computation than to a classical dynamical system.[14]
Furthermore, following concepts from computational theory, microprocesses in biological organisms are fundamentally incomplete and undecidable (in the sense of logical completeness), implying that "there is more than a crude metaphor behind the analogy between cells and computers".[15]
The analogy to computation extends also to the relationship between inheritance systems and biological structure, which is often thought to reveal one of the most pressing problems in explaining the origins of life.
Evolutionary automata,[16][17][18] a generalization of evolutionary Turing machines,[19][20] have been introduced in order to investigate more precisely the properties of biological and evolutionary computation. In particular, they make it possible to obtain new results on the expressiveness of evolutionary computation.[18][21] This confirms the initial result about the undecidability of natural evolution and of evolutionary algorithms and processes. Evolutionary finite automata, the simplest subclass of evolutionary automata working in terminal mode, can accept arbitrary languages over a given alphabet, including non-recursively enumerable languages (e.g., the diagonalization language) and recursively enumerable but not recursive languages (e.g., the language of the universal Turing machine).[22]
The list of active researchers is naturally dynamic and non-exhaustive. A network analysis of the community was published in 2007.[23]
While articles on or using evolutionary computation permeate the literature, several journals are dedicated to evolutionary computation:
The main conferences in the evolutionary computation area include