In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, tune, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem or a machine learning problem, especially with incomplete or imperfect information or limited computation capacity.[1][2][3][4] Metaheuristics sample a subset of a solution space that is otherwise too large to be completely enumerated or otherwise explored. Metaheuristics may make relatively few assumptions about the optimization problem being solved and so may be usable for a variety of problems.[1][5][6] Their use is of interest whenever exact or other (approximate) methods are unavailable or inexpedient, either because the calculation time would be too long or because, for example, the solution provided would be too imprecise.
Compared to optimization algorithms and iterative methods, metaheuristics do not guarantee that a globally optimal solution can be found on some class of problems.[4] Many metaheuristics implement some form of stochastic optimization, so that the solution found is dependent on the set of random variables generated.[3] In combinatorial optimization, many problems belong to the class of NP-complete problems and thus, from a relatively modest problem size upwards, can no longer be solved exactly in acceptable time.[7][8] Metaheuristics then often provide good solutions with less computational effort than approximation methods, iterative methods, or simple heuristics.[4][1] This also applies in the field of continuous or mixed-integer optimization.[1][9][10] As such, metaheuristics are useful approaches for optimization problems.[3] Several books and survey papers have been published on the subject.[3][4][1][11][12] A literature review on metaheuristic optimization[13] suggested that it was Fred Glover who coined the word metaheuristics.[14]
Most literature on metaheuristics is experimental in nature, describing empirical results based on computer experiments with the algorithms. But some formal theoretical results are also available, often on convergence and the possibility of finding the global optimum.[4][15] Also worth mentioning are the no-free-lunch theorems, which state that there can be no metaheuristic that is better than all others when performance is averaged over all possible problems.
Especially since the turn of the millennium, many metaheuristic methods have been published with claims of novelty and practical efficacy. While the field also features high-quality research, many of the more recent publications have been of poor quality; flaws include vagueness, lack of conceptual elaboration, poor experiments, and ignorance of previous literature.[16][17]
These are properties that characterize most metaheuristics:[4]
Metaheuristics are strategies that guide the search process.
The goal is to efficiently explore the search space in order to find optimal or near-optimal solutions.
Techniques which constitute metaheuristic algorithms range from simple local search procedures to complex learning processes.
Metaheuristic algorithms are approximate and usually non-deterministic.
Metaheuristics are not problem-specific. However, they are often developed for a particular problem class, such as continuous[18][19] or combinatorial optimization,[20] and in some cases generalized later.[21][22]
They can draw on domain-specific knowledge in the form of heuristics that are controlled by a higher-level strategy of the metaheuristic.
They can contain mechanisms that prevent them from getting stuck in certain areas of the search space.
Modern metaheuristics often use the search history to control the search.
Euler diagram of the different classifications of metaheuristics[23]
There is a wide variety of metaheuristics[3][1] and a number of properties by which they can be classified.[4][24][25][26] The following list is therefore to be understood as exemplary rather than exhaustive.
One approach is to characterize the type of search strategy.[4] One type of search strategy is an improvement on simple local search algorithms. A well-known local search algorithm is the hill climbing method, which is used to find local optima. However, hill climbing does not guarantee finding globally optimal solutions.
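To make the idea concrete, the following is a minimal sketch of hill climbing in Python; the objective f, the neighbors function, and the integer toy problem are illustrative assumptions rather than a fixed interface.

```python
import random

def hill_climbing(f, neighbors, x, max_iters=10_000):
    """Greedy local search: repeatedly move to the best improving neighbor.

    f         -- objective function to maximize (illustrative)
    neighbors -- function returning the neighborhood of a solution
    x         -- initial solution
    """
    for _ in range(max_iters):
        best = max(neighbors(x), key=f, default=x)
        if f(best) <= f(x):  # no improving neighbor: a local optimum
            return x
        x = best
    return x

# Toy usage: maximize -(x - 5)^2 over the integers by stepping +/-1.
print(hill_climbing(lambda x: -(x - 5) ** 2,
                    lambda x: [x - 1, x + 1],
                    x=random.randint(-100, 100)))  # prints 5
```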
A hybrid metaheuristic is one that combines a metaheuristic with other optimization approaches, such as algorithms from mathematical programming, constraint programming, and machine learning. Both components of a hybrid metaheuristic may run concurrently and exchange information to guide the search.
On the other hand, memetic algorithms[30] represent the synergy of evolutionary or any other population-based approach with separate individual learning or local improvement procedures for problem search. An example of a memetic algorithm is the use of a local search algorithm instead of, or in addition to, a basic mutation operator in evolutionary algorithms.
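As an illustration, here is a minimal sketch of the memetic pattern in Python, in which offspring produced by mutation are refined by a local search before selection; all names and parameter choices below are hypothetical.

```python
import random

def local_improve(f, x, steps=100):
    """First-improvement local search on the integer line (illustrative)."""
    for _ in range(steps):
        better = [y for y in (x - 1, x + 1) if f(y) > f(x)]
        if not better:
            return x  # local optimum reached
        x = better[0]
    return x

def memetic_step(population, f, mutate):
    """One generation: mutate each parent, refine offspring by local search,
    then keep the best individuals (truncation selection, maximization)."""
    offspring = [local_improve(f, mutate(p)) for p in population]
    pool = population + offspring
    return sorted(pool, key=f, reverse=True)[:len(population)]

# Toy usage: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
pop = [random.randint(-100, 100) for _ in range(10)]
for _ in range(20):
    pop = memetic_step(pop, f, lambda x: x + random.choice([-7, 7]))
print(max(pop, key=f))  # typically 3
```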
A parallel metaheuristic is one that uses the techniques of parallel programming to run multiple metaheuristic searches in parallel; these may range from simple distributed schemes to concurrent search runs that interact to improve the overall solution.
With population-based metaheuristics, the population itself can be parallelized, either by processing each individual or group of individuals in a separate thread, or by running the metaheuristic itself on one computer while the offspring are evaluated in a distributed manner per iteration.[31] The latter is particularly useful if the computational effort for the evaluation is considerably greater than that for generating the offspring. This is the case in many practical applications, especially in simulation-based calculations of solution quality.[32][33]
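A minimal sketch of this master-worker scheme, assuming Python's multiprocessing.Pool as the distribution mechanism and a stand-in expensive_eval for the costly quality evaluation:

```python
import random
from multiprocessing import Pool

def expensive_eval(x):
    """Stand-in for a costly, e.g. simulation-based, quality evaluation."""
    return -(x - 3.0) ** 2

if __name__ == "__main__":
    population = [random.uniform(-100, 100) for _ in range(16)]
    with Pool() as workers:  # the metaheuristic itself stays on one process
        for _ in range(30):
            candidates = population + [x + random.gauss(0, 1.0) for x in population]
            # Only the evaluations are farmed out to the worker processes:
            fitnesses = workers.map(expensive_eval, candidates)
            ranked = sorted(zip(fitnesses, candidates), reverse=True)
            population = [x for _, x in ranked[:len(population)]]
    print(population[0])  # close to the optimum at 3
```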
A very active area of research is the design of nature-inspired metaheuristics. Many recent metaheuristics, especially evolutionary computation-based algorithms, are inspired by natural systems. Nature acts as a source of concepts, mechanisms and principles for designing artificial computing systems to deal with complex computational problems. Such metaheuristics include simulated annealing, evolutionary algorithms, ant colony optimization and particle swarm optimization.
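As an example, simulated annealing can be sketched in a few lines of Python; the geometric cooling schedule and the Gaussian neighbor move below are illustrative choices, not part of the method's definition.

```python
import math
import random

def simulated_annealing(f, neighbor, x, t0=1.0, cooling=0.995, steps=10_000):
    """Minimize f; worse moves are accepted with probability exp(-delta / t)."""
    t, best = t0, x
    for _ in range(steps):
        y = neighbor(x)
        delta = f(y) - f(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y                    # accept the move (always if improving)
            if f(x) < f(best):
                best = x
        t *= cooling                 # geometric cooling schedule
    return best

# Toy usage: a multimodal one-dimensional function.
g = lambda x: x * x + 10 * math.sin(x)
print(simulated_annealing(g, lambda x: x + random.gauss(0, 1.0), x=20.0))
```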
A large number of more recent metaphor-inspired metaheuristics have started to attract criticism in the research community for hiding their lack of novelty behind an elaborate metaphor.[16][17][25][34] As a result, a number of renowned scientists in the field have proposed a research agenda for the standardization of metaheuristics in order to make them more comparable, among other things.[35] Another consequence is that the publication guidelines of a number of scientific journals have been adapted accordingly.[36][37][38]
Most metaheuristics are search methods, and when using them, the evaluation function is subject to greater demands than in mathematical optimization. Not only does the desired target state have to be formulated, but the evaluation should also reward improvements to a solution on the way to the target in order to support and accelerate the search process. The fitness functions of evolutionary or memetic algorithms can serve as an example.
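A small illustrative contrast with hypothetical names: an evaluation that only detects the target state gives the search no guidance, whereas a graded one rewards each improvement along the way.

```python
# Target state and two ways of evaluating a candidate solution s.
target = [1, 0, 1, 1, 0, 1]

def binary_eval(s):
    """Rewards only the final state and gives the search no guidance."""
    return 1 if s == target else 0

def graded_eval(s):
    """Rewards every position already matching the target, so intermediate
    improvements are visible to the metaheuristic and can guide the search."""
    return sum(a == b for a, b in zip(s, target))
```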
Metaheuristics are used for all types of optimization problems, ranging from continuous through mixed integer problems to combinatorial optimization or combinations thereof.[9][39][40] In combinatorial optimization, an optimal solution is sought over a discrete search space. An example problem is the travelling salesman problem, where the search space of candidate solutions grows faster than exponentially as the size of the problem increases, which makes an exhaustive search for the optimal solution infeasible.[41][42] Additionally, multidimensional combinatorial problems, including most design problems in engineering[6][43][44][45] such as form-finding and behavior-finding, suffer from the curse of dimensionality, which also makes them infeasible for exhaustive search or analytical methods.
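The infeasibility of exhaustive search is easy to quantify: the number of distinct undirected tours of n cities is (n - 1)!/2, which a few lines of Python make tangible.

```python
import math

# Distinct undirected tours of n cities: (n - 1)! / 2.
for n in (5, 10, 20, 50):
    print(n, math.factorial(n - 1) // 2)
# 5 -> 12, 10 -> 181440, 20 -> about 6.1e16, 50 -> about 3.0e62
```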
Metaheuristics are also frequently applied to scheduling problems. A typical representative of this combinatorial task class is job shop scheduling, which involves assigning the work steps of jobs to processing stations in such a way that all jobs are completed on time and altogether in the shortest possible time.[5][46] In practice, restrictions often have to be observed, e.g. by limiting the permissible sequence of work steps of a job through predefined workflows[47] and/or with regard to resource utilisation, e.g. in the form of smoothing the energy demand.[48][49] Popular metaheuristics for combinatorial problems include genetic algorithms by Holland et al.,[50] scatter search[51] and tabu search[52] by Glover.
Another large field of application comprises optimization tasks in continuous or mixed-integer search spaces. This includes, e.g., design optimization[6][53][54] or various engineering tasks.[55][56][57] An example of the mixture of combinatorial and continuous optimization is the planning of favourable motion paths for industrial robots.[58][59]
A metaheuristic optimization framework (MOF) can be defined as "a set of software tools that provide a correct and reusable implementation of a set of metaheuristics, and the basic mechanisms to accelerate the implementation of its partner subordinate heuristics (possibly including solution encodings and technique-specific operators), which are necessary to solve a particular problem instance using techniques provided".[60]
There are many candidate optimization tools which can be considered MOFs of varying feature completeness. The following 33 MOFs are compared and evaluated in detail in:[60] Comet, EvA2, evolvica, Evolutionary::Algorithm, GAPlayground, jaga, JCLEC, JGAP, jMetal, n-genes, Open Beagle, Opt4j, ParadisEO/EO, Pisa, Watchmaker, FOM, Hypercube, HotFrame, Templar, EasyLocal, iOpt, OptQuest, JDEAL, Optimization Algorithm Toolkit, HeuristicLab, MAFRA, Localizer, GALIB, DREAM, Discropt, MALLBA, MAGMA, and UOF. A number of publications, particularly from the late 2010s onwards, have since addressed the support of parallel implementations, which was missing in this comparative study.[32][33][61][62][63]
Many different metaheuristics are in existence and new variants are continually being proposed. Some of the most significant contributions to the field are:
1952: Robbins and Monro work on stochastic optimization methods.[64]
1954: Barricelli carries out the first simulations of the evolution process and uses them on general optimization problems.[65]
1990: Moscato and Fontanari,[77] and Dueck and Scheuer,[78] independently propose a deterministic update rule for simulated annealing which accelerates the search. This leads to the threshold accepting metaheuristic.
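The difference between the two acceptance rules can be stated compactly for minimization, with delta denoting the change in objective value; the function names are hypothetical.

```python
import math
import random

# delta = f(candidate) - f(current), for minimization.

def sa_accept(delta, temperature):
    """Simulated annealing: worse moves are accepted probabilistically."""
    return delta <= 0 or random.random() < math.exp(-delta / temperature)

def ta_accept(delta, threshold):
    """Threshold accepting: deterministic rule; any move that is not worse
    than the current threshold is accepted. The threshold shrinks over time."""
    return delta < threshold
```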
^ Glover, F.; Kochenberger, G.A. (2003). Handbook of Metaheuristics. Vol. 57. Springer, International Series in Operations Research & Management Science. ISBN 978-1-4020-7263-5.
^ Papadimitriou, Christos H.; Steiglitz, Kenneth (1998). Combinatorial Optimization: Algorithms and Complexity. Mineola, N.Y.: Dover Publ., corrected, unabridged new edition of the work published by Prentice-Hall in 1982. ISBN 978-0-486-40258-1.
^ Colorni, Alberto; Dorigo, Marco; Maniezzo, Vittorio (1991), Varela, F.; Bourgine, P. (eds.), "Distributed Optimization by Ant Colonies", Conf. Proc. of ECAL91 - European Conference on Artificial Life, Amsterdam: Elsevier Publ., pp. 134-142, ISBN 9780262720199.
^ Raidl, Günther R. (2006), Almeida, Francisco; Blesa Aguilera, María J.; Blum, Christian; Moreno Vega, José Marcos (eds.), "A Unified View on Hybrid Metaheuristics", Hybrid Metaheuristics, Lecture Notes in Computer Science, vol. 4030, Berlin, Heidelberg: Springer, pp. 1-12, doi:10.1007/11890584_1, ISBN 978-3-540-46384-9. Retrieved 2024-08-24.
^ Lones, Michael A. (2014), Igel, Christian (ed.), "Metaheuristics in nature-inspired algorithms", Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation (GECCO'14), ACM Conferences, New York, NY: ACM, pp. 1419-1422, doi:10.1145/2598394.2609841, ISBN 978-1-4503-2881-4.
^ Swan, Jerry; Adriaensen, Steven; Bishr, Mohamed; Burke, Edmund K.; Clark, John A.; De Causmaecke, Patrick; Durillo, Juan José; Hammond, Kevin; Hart, Emma; Johnson, Colin G.; Kocsis, Zoltan A.; Kovitz, Ben; Krawiec, Krzysztof; Martin, Simon; Merelo, Juan J.; Minku, Leandro L.; Özcan, Ender; Pappa, Gisele Lobo; Pesch, Erwin; García-Sánchez, Pablo; Schaerf, Andrea; Sim, Kevin; Smith, Jim; Stützle, Thomas; Wagner, Stefan (2015). "A Research Agenda for Metaheuristic Standardization" (PDF). Semantic Scholar. S2CID 63728283. Retrieved 2024-08-30.
^ Dorigo, M.; Gambardella, L.M. (April 1997). "Ant colony system: a cooperative learning approach to the traveling salesman problem". IEEE Transactions on Evolutionary Computation. 1 (1): 53-66. doi:10.1109/4235.585892.
^ Ganesan, T.; Elamvazuthi, I.; Ku Shaari, Ku Zilati; Vasant, P. (2013-03-01). "Swarm intelligence and gravitational search algorithm for multi-objective optimization of synthesis gas production". Applied Energy. 103: 368-374. Bibcode:2013ApEn..103..368G. doi:10.1016/j.apenergy.2012.09.059.
^ Ganesan, T.; Elamvazuthi, I.; Vasant, P. (2011-11-01). "Evolutionary normal-boundary intersection (ENBI) method for multi-objective optimization of green sand mould system". 2011 IEEE International Conference on Control System, Computing and Engineering. pp. 86-91. doi:10.1109/ICCSCE.2011.6190501. ISBN 978-1-4577-1642-3. S2CID 698459.
^ Glover, F. (1986). "Future Paths for Integer Programming and Links to Artificial Intelligence". Computers and Operations Research. 13 (5): 533-549. doi:10.1016/0305-0548(86)90048-1.
^ Parejo, José Antonio; Ruiz-Cortés, Antonio; Lozano, Sebastián; Fernandez, Pablo (March 2012). "Metaheuristic optimization frameworks: a survey and benchmarking". Soft Computing. 16 (3): 527-561. doi:10.1007/s00500-011-0754-8. ISSN 1432-7643.
^ García-Valdez, Mario; Merelo, J.J. (2017-07-15), "evospace-js: asynchronous pool-based execution of heterogeneous metaheuristics", GECCO '17: Proceedings of the Genetic and Evolutionary Computation Conference Companion, New York: ACM, pp. 1202-1208, doi:10.1145/3067695.3082473, ISBN 978-1-4503-4939-0.
^ Nebro, Antonio J.; Barba-González, Cristóbal; Nieto, José García; Cordero, José A.; Montes, José F. Aldana (2017-07-15), "Design and architecture of the jMetalSP framework", GECCO '17: Proceedings of the Genetic and Evolutionary Computation Conference Companion, New York: ACM, pp. 1239-1246, doi:10.1145/3067695.3082466, ISBN 978-1-4503-4939-0.
^ Barricelli, N.A. (1954). "Esempi numerici di processi di evoluzione" [Numerical examples of evolution processes]. Methodos: 45-68.
^ Rastrigin, L.A. (1963). "The convergence of the random search method in the extremal control of a many parameter system". Automation and Remote Control. 24 (10): 1337-1342.
^ Cavicchio, D.J. (1970). "Adaptive search using simulated evolution". Technical Report. University of Michigan, Computer and Communication Sciences Department. hdl:2027.42/4042.