A comparison of simulated annealing with a simple evolutionary algorithm
FOGA'05 Proceedings of the 8th international conference on Foundations of Genetic Algorithms
Simulated annealing and the (1+1) EA, a simple evolutionary algorithm, are both general randomized search heuristics that optimize any objective function with probability converging to 1. However, they use very different techniques to achieve this global convergence. The (1+1) EA applies global mutations that can reach any point in the search space in a single step, together with an elitist selection mechanism. Simulated annealing restricts its search to a neighborhood but employs a randomized selection scheme in which the probability of accepting a move to a new point in the search space depends on the difference in function values as well as on the current time step. In all other respects, the two algorithms are identical. It is known that the different search philosophies implemented in the two heuristics can lead to exponential performance gaps between them with respect to the expected optimization time. Even for very restricted classes of objective functions, where the differences in function values between neighboring points are strictly limited, the performance differences can be huge. Here, a more local point of view is taken. Considering obstacles in the fitness landscape, it is proven that the local performance of the two algorithms is remarkably similar in spite of their different search behaviors.
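The contrast described above can be made concrete in code. The following is a minimal Python sketch, not taken from the paper: the (1+1) EA uses global bit-wise mutation with rate 1/n and elitist selection, while simulated annealing flips a single bit (a Hamming-neighborhood move) and accepts worsenings with a time-dependent Metropolis probability. The OneMax objective and the multiplicative cooling schedule are illustrative assumptions, not choices made in the paper.

```python
import math
import random

def one_max(x):
    # Illustrative pseudo-Boolean objective: number of ones in the bit string.
    return sum(x)

def one_plus_one_ea(f, n, steps, seed=0):
    """(1+1) EA: global mutation (flip each bit independently with prob. 1/n)
    plus elitist selection (never accept a worse point)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        if f(y) >= f(x):  # elitist acceptance
            x = y
    return x

def simulated_annealing(f, n, steps, seed=0, cooling=0.999):
    """Simulated annealing: local move (flip one bit, i.e. a Hamming-distance-1
    neighbor) plus Metropolis acceptance whose temperature, and hence the
    probability of accepting a worsening, depends on the current time step.
    The multiplicative cooling schedule here is an assumed example."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    temp = 1.0
    for _ in range(steps):
        y = x[:]
        y[rng.randrange(n)] ^= 1          # neighborhood move
        delta = f(y) - f(x)
        # Improvements are always accepted; worsenings with prob. exp(delta/T).
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            x = y
        temp *= cooling                    # acceptance of bad moves decays over time
    return x
```

Both routines maximize `f`; the only differences are the mutation operator (global vs. one-bit) and the selection rule (elitist vs. temperature-dependent), mirroring the comparison in the abstract.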