Metaheuristic

From Wikipedia, the free encyclopedia

In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a lower-level procedure or heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity.[1] Metaheuristics may make few assumptions about the optimization problem being solved, and so they may be usable for a variety of problems.[2]

Compared to optimization algorithms and iterative methods, metaheuristics do not guarantee that a globally optimal solution will be found on some classes of problems.[2] Many metaheuristics implement some form of stochastic optimization, so that the solution found depends on the set of random variables generated.[1] By searching over a large set of feasible solutions, metaheuristics can often find good solutions with less computational effort than optimization algorithms, iterative methods, or simple heuristics.[2] As such, they are useful approaches for optimization problems.[1] Several books and survey papers have been published on the subject.[1][2][3][4][5]

Most literature on metaheuristics is experimental in nature, describing empirical results based on computer experiments with the algorithms. But some formal theoretical results are also available, often on convergence and the possibility of finding the global optimum.[2] A very large number of metaheuristic methods have been published with claims of novelty and practical efficacy. Unfortunately, many of these publications have been of poor quality; common flaws include vagueness, lack of conceptual elaboration, poor experiments, and ignorance of previous literature. The field also features high-quality research.[6]

Properties and classification

Figure: Different classifications of metaheuristics.

These are properties that characterize most metaheuristics:[2]

  • Metaheuristics are strategies that guide the search process.
  • The goal is to efficiently explore the search space in order to find near-optimal solutions.
  • Techniques which constitute metaheuristic algorithms range from simple local search procedures to complex learning processes.
  • Metaheuristic algorithms are approximate and usually non-deterministic.
  • Metaheuristics are not problem-specific.

There are a wide variety of metaheuristics[1] and a number of properties along which to classify them.[2]

One approach is to characterize the type of search strategy.[2] One type of search strategy improves on simple local search algorithms; metaheuristics of this type include simulated annealing, tabu search, iterated local search, variable neighborhood search, and GRASP.[2] The other type of search strategy adds a learning component to the search; metaheuristics of this type include ant colony optimization, evolutionary computation, and genetic algorithms.[2]
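The local-search-based strategy can be illustrated with a minimal simulated annealing sketch in Python. The objective function, step size, and cooling schedule below are illustrative choices, not part of any canonical formulation, and the method only finds a good local minimum in practice:

```python
import math
import random

def simulated_annealing(objective, x0, step, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimize `objective` starting from x0.

    Worse moves are accepted with probability exp(-delta / T), which lets
    the search escape shallow local minima early on; as T cools, the
    search behaves more and more like greedy local search.
    """
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random neighbor of x
        fc = objective(cand)
        delta = fc - fx
        # Always accept improvements; accept worsenings with Boltzmann probability.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                          # geometric cooling schedule
    return best, fbest

# Illustrative multimodal 1-D objective.
f = lambda x: x ** 2 + 10 * math.sin(3 * x)
x_best, f_best = simulated_annealing(f, x0=5.0, step=0.5)
```

The temperature parameter is what distinguishes this from plain hill climbing: at high temperature the search wanders, and only as the temperature drops does it commit to a basin.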

Another classification dimension is single-solution vs. population-based searches.[2][5] Single-solution approaches focus on modifying and improving a single candidate solution; single-solution metaheuristics include simulated annealing, iterated local search, variable neighborhood search, and guided local search.[5] Population-based approaches maintain and improve multiple candidate solutions, often using population characteristics to guide the search; population-based metaheuristics include evolutionary computation, genetic algorithms, and particle swarm optimization.[5] Another category of metaheuristics is swarm intelligence, the collective behavior of decentralized, self-organized agents in a population or swarm. Ant colony optimization,[7] particle swarm optimization,[5] and artificial bee colony[8] algorithms are examples of this category.
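A population-based search can be sketched with a minimal particle swarm optimization in Python. Parameter values such as the inertia weight and acceleration coefficients below are common illustrative choices, not definitive settings:

```python
import random

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization: each particle is pulled toward
    its own best-seen position (cognitive term) and the swarm's best-seen
    position (social term), with inertia weight w damping the velocity."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Illustrative objective: the sphere function, minimized at the origin.
sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere, dim=3)
```

Note how the population itself guides the search: every particle's update mixes its own history with the swarm's best, which is the defining feature of this class of methods.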

In addition to the serial algorithms above, there are hybrid and parallel metaheuristics.[5] A hybrid metaheuristic is one which combines a metaheuristic with other optimization approaches, such as algorithms from mathematical programming, constraint programming, and machine learning. Both components of a hybrid metaheuristic may run concurrently and exchange information to guide the search. A parallel metaheuristic is one which uses the techniques of parallel programming to run multiple metaheuristic searches in parallel; these may range from simple distributed schemes to concurrent search runs that interact to improve the overall solution.
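A simple form of parallel metaheuristic runs several independent searches concurrently and keeps the best result. The Python sketch below uses a basic hill-climber as the inner search and a thread pool for concurrency; a real deployment of CPU-bound searches would more likely use processes or distributed workers, and all names here are illustrative:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def local_search(objective, seed, dim=2, iters=2000, step=0.1):
    """Greedy hill-climber from a random start; one independent search run."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = objective(x)
    for _ in range(iters):
        cand = [v + rng.uniform(-step, step) for v in x]
        fc = objective(cand)
        if fc < fx:          # accept only improvements
            x, fx = cand, fc
    return x, fx

def parallel_multistart(objective, n_runs=8):
    """Run several searches concurrently (one seed per run) and keep the best."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda s: local_search(objective, s), range(n_runs)))
    return min(results, key=lambda r: r[1])

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = parallel_multistart(sphere)
```

This is the simplest distributed scheme mentioned above: the runs do not interact at all. Cooperative parallel schemes would additionally exchange good solutions between runs during the search.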

Applications

Metaheuristics are used for combinatorial optimization, in which an optimal solution is sought over a discrete search space. An example problem is the travelling salesman problem, where the search space of candidate solutions grows faster than exponentially as the size of the problem increases, which makes an exhaustive search for the optimal solution infeasible. Additionally, multidimensional combinatorial problems, including most design problems in engineering[9] such as form-finding and behavior-finding, suffer from the curse of dimensionality, which also makes them infeasible for exhaustive search or analytical methods. Popular metaheuristics for combinatorial problems include simulated annealing by Kirkpatrick et al.,[10] genetic algorithms by Holland et al.,[11] scatter search[12] and tabu search[13] by Glover. A literature review on metaheuristic optimization[14] suggested that it was Fred Glover who coined the word "metaheuristics".[15]
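For a concrete combinatorial example, the Python sketch below applies 2-opt local search, a simple improvement heuristic that many metaheuristics for the travelling salesman problem build on, to a small random instance. The instance and function names are illustrative, and 2-opt only guarantees a local optimum:

```python
import itertools
import math
import random

def tour_length(tour, pts):
    """Total length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, seed=0):
    """2-opt local search: repeatedly reverse a segment of the tour
    whenever doing so shortens it, until no improving move remains."""
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    rng.shuffle(tour)                     # random starting tour
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            if j - i < 2:                 # segment too short to matter
                continue
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(cand, pts) < tour_length(tour, pts):
                tour, improved = cand, True
    return tour, tour_length(tour, pts)

# Small random instance in the unit square.
rng = random.Random(1)
points = [(rng.random(), rng.random()) for _ in range(12)]
tour, length = two_opt(points)
```

A metaheuristic such as simulated annealing or iterated local search would wrap a move operator like this one with a strategy for escaping the local optima at which plain 2-opt stops.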

Contributions

Many different metaheuristics exist, and new variants are continually proposed. Some of the most significant contributions to the field are:

See also

References

  1. ^ a b c d e Bianchi, Leonora; Marco Dorigo, Luca Maria Gambardella, and Walter J. Gutjahr (2009). "A survey on metaheuristics for stochastic combinatorial optimization". Natural Computing: an international journal 8 (2): 239–287. doi:10.1007/s11047-008-9098-4. 
  2. ^ a b c d e f g h i j k Blum, C.; Roli, A. (2003). "Metaheuristics in combinatorial optimization: Overview and conceptual comparison". ACM Computing Surveys 35 (3): 268–308. 
  3. ^ Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley. ISBN 0-201-15767-5. 
  4. ^ Glover, F.; Kochenberger, G.A. (2003). Handbook of Metaheuristics. International Series in Operations Research & Management Science 57. Springer. ISBN 978-1-4020-7263-5. 
  5. ^ a b c d e f Talbi, E-G. (2009). Metaheuristics: from design to implementation. Wiley. ISBN 0-470-27858-7. 
  6. ^ Sörensen, Kenneth. "Metaheuristics—the metaphor exposed". International Transactions in Operational Research. doi:10.1111/itor.12001. 
  7. ^ a b M. Dorigo, Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italy, 1992.
  8. ^ Karaboga, Dervis (2010). "Artificial bee colony algorithm". Scholarpedia 5 (3): 6915. doi:10.4249/scholarpedia.6915. 
  9. ^ Tomoiagă, B.; Chindriş, M.; Sumper, A.; Sudria-Andreu, A.; Villafafila-Robles, R. (2013). "Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II". Energies 6 (3): 1439–1455.
  10. ^ a b Kirkpatrick, S.; Gelatt Jr., C.D.; Vecchi, M.P. (1983). "Optimization by Simulated Annealing". Science 220 (4598): 671–680. doi:10.1126/science.220.4598.671. PMID 17813860. 
  11. ^ a b Holland, J.H. (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press. ISBN 0-262-08213-6. 
  12. ^ a b Glover, Fred (1977). "Heuristics for Integer programming Using Surrogate Constraints". Decision Sciences 8 (1): 156–166. doi:10.1111/j.1540-5915.1977.tb01074.x. 
  13. ^ a b Glover, F. (1986). "Future Paths for Integer Programming and Links to Artificial Intelligence". Computers and Operations Research 13 (5): 533–549. doi:10.1016/0305-0548(86)90048-1. 
  14. ^ Yang, X.S. (2011). "Metaheuristic optimization". Scholarpedia 6 (8): 11472.
  15. ^ Glover, F. (1986). "Future paths for integer programming and links to artificial intelligence". Computers and Operations Research 13 (5): 533–549.
  16. ^ Robbins, H.; Monro, S. (1951). "A Stochastic Approximation Method". Annals of Mathematical Statistics 22 (3): 400–407. doi:10.1214/aoms/1177729586. 
  17. ^ Barricelli, N.A. (1954). "Esempi numerici di processi di evoluzione". Methodos: 45–68. 
  18. ^ Rastrigin, L.A. (1963). "The convergence of the random search method in the extremal control of a many parameter system". Automation and Remote Control 24 (10): 1337–1342. 
  19. ^ Matyas, J. (1965). "Random optimization". Automation and Remote Control 26 (2): 246–253. 
  20. ^ Nelder, J.A.; Mead, R. (1965). "A simplex method for function minimization". Computer Journal 7: 308–313. doi:10.1093/comjnl/7.4.308. 
  21. ^ Fogel, L.; Owens, A.J.; Walsh, M.J. (1966). Artificial Intelligence through Simulated Evolution. Wiley. ISBN 0-471-26516-0. 
  22. ^ Hastings, W.K. (1970). "Monte Carlo Sampling Methods Using Markov Chains and Their Applications". Biometrika 57 (1): 97–109. doi:10.1093/biomet/57.1.97. 
  23. ^ Cavicchio, D.J. (1970). "Adaptive search using simulated evolution". Technical Report (University of Michigan, Computer and Communication Sciences Department). hdl:2027.42/4042. 
  24. ^ Kernighan, B.W.;Lin, S. (1970). "An efficient heuristic procedure for partitioning graphs". Bell System Technical Journal 49 (2): 291–307. doi:10.1002/j.1538-7305.1970.tb01770.x. 
  25. ^ Mercer, R.E.; Sampson, J.R. (1978). "Adaptive search using a reproductive metaplan". Kybernetes (The International Journal of Systems and Cybernetics) 7 (3): 215–228. doi:10.1108/eb005486. 
  26. ^ Smith, S.F. (1980). A Learning System Based on Genetic Adaptive Algorithms (PhD Thesis). University of Pittsburgh. 
  27. ^ Moscato, P. (1989). "On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms". Caltech Concurrent Computation Program (report 826). 
  28. ^ Wolpert, D.H.; Macready, W.G. (1995). "No free lunch theorems for search". Technical Report SFI-TR-95-02-010 (Santa Fe Institute). 
  29. ^ Igel, Christian; Toussaint, Marc (Jun 2003). "On classes of functions for which No Free Lunch results hold". Information Processing Letters (Elsevier North-Holland, Inc.) 86 (6): 317–321. doi:10.1016/S0020-0190(03)00222-9. ISSN 0020-0190. Retrieved 9 April 2013. 
  30. ^ Auger, Anne; Teytaud, Olivier (2010). "Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms". Algorithmica (Springer-Verlag) 57 (1): 121–146. doi:10.1007/s00453-008-9244-5. ISSN 0178-4617. Retrieved 9 April 2013. 
  31. ^ Stefan Droste, Thomas Jansen, Ingo Wegener (2002). "Optimization with Randomized Search Heuristics – The (A)NFL Theorem, Realistic Scenarios, and Difficult Functions". Theoretical Computer Science 287 (1): 131–144. doi:10.1016/s0304-3975(02)00094-4. Retrieved 9 April 2013. 

External links