We reconsider a classical problem, namely how the (1+1) evolutionary algorithm optimizes the LEADINGONES function. We prove that if a mutation probability of p is used and the problem size is n, then the expected optimization time is (1/(2p²)) · ((1 − p)^(−n+1) − (1 − p)). For the standard value p = 1/n, this is approximately 0.86n². As our bound shows, this mutation probability is not optimal: for p ≈ 1.59/n, the optimization time drops by more than 16% to approximately 0.77n². Our method also allows us to analyze mutation probabilities that depend on the current fitness (as used in artificial immune systems). Again, we derive an exact expression. Analyzing it, we find a fitness-dependent mutation probability that yields an expected optimization time of approximately 0.68n², a further 12% improvement over the optimal fixed mutation rate. In particular, this is the first example in which an adaptive mutation rate provably speeds up the optimization time. In a general context, these results suggest that the final word on mutation probabilities in evolutionary computation has not yet been spoken.
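The constants quoted above can be checked numerically from the closed-form expression. The sketch below (a minimal illustration, not code from the paper; the function name `expected_time` and the choice n = 1000 are ours) evaluates (1/(2p²)) · ((1 − p)^(−n+1) − (1 − p)) for the standard rate p = 1/n and the near-optimal rate p = 1.59/n:

```python
def expected_time(n: int, p: float) -> float:
    """Exact expected optimization time of the (1+1) EA on LeadingOnes
    with fixed mutation probability p, per the formula in the abstract."""
    return ((1 - p) ** (-n + 1) - (1 - p)) / (2 * p ** 2)

n = 1000
t_std = expected_time(n, 1.0 / n)    # standard rate: roughly 0.86 * n^2
t_opt = expected_time(n, 1.59 / n)   # near-optimal rate: roughly 0.77 * n^2

print(t_std / n**2)  # coefficient for p = 1/n
print(t_opt / n**2)  # coefficient for p = 1.59/n
```

For large n, (1 − c/n)^(−n+1) tends to e^c, so the time for p = c/n behaves like n²(e^c − 1)/(2c²); minimizing this coefficient in c is what yields the optimal rate near 1.59/n.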