Mutation rate matters even when optimizing monotonic functions
Evolutionary Computation
Evolutionary algorithms operating on bit strings usually employ a global mutation operator that flips each bit independently with some mutation probability. Most often the mutation probability is fixed so that, on average, exactly one bit is flipped per mutation. A seemingly very similar concept is a local operator that flips exactly one bit chosen uniformly at random. Most known results indicate that the global approach leads to running times at least as good as those of the local approach. The drawback is that the global approach is much harder to analyze. It would therefore be highly useful to derive general principles describing when and how results for the local operator carry over to the global one. In this paper, we show that there is little hope for such general principles, even under very favorable conditions. We exhibit a fitness function on which the local operator finds the optimum from every initial search point in small polynomial time, whereas the global operator needs weakly exponential time from almost all initial search points.
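The two mutation operators contrasted above can be sketched as follows. This is an illustrative sketch only: the function names are made up here, and the default mutation probability 1/n (which flips one bit in expectation) is the common convention mentioned in the abstract, not a parameter prescribed by this paper.

```python
import random

def local_mutation(x):
    """Local operator: flip exactly one bit chosen uniformly at random
    (as in randomized local search)."""
    y = list(x)
    i = random.randrange(len(y))  # uniform position
    y[i] ^= 1                     # flip that single bit
    return y

def global_mutation(x, p=None):
    """Global operator (standard bit mutation): flip each bit
    independently with probability p. The default p = 1/n flips
    exactly one bit in expectation."""
    n = len(x)
    if p is None:
        p = 1 / n
    return [b ^ 1 if random.random() < p else b for b in x]
```

Note the qualitative difference: the local operator always produces a neighbor at Hamming distance exactly 1, while the global operator can flip zero bits or several at once, which is precisely what makes its analysis harder.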