Motivated by recent successful applications of the concept of quasirandomness, we investigate to what extent such ideas can be used in evolutionary computation. To this end, we propose several variations of the classical (1+1) evolutionary algorithm, all imitating the property that, over intervals of time, the (1+1) EA touches all bits roughly the same number of times. We prove bounds on the optimization time of these algorithms for the simple OneMax function. Surprisingly, none of the algorithms achieves the seemingly obvious reduction of the runtime from Θ(n log n) to O(n). On the contrary, one variant may even need Ω(n²) time. However, we also find that quasirandom ideas, if implemented correctly, can yield a speed-up of more than 50%.
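For readers unfamiliar with the baseline being modified, the following is a minimal sketch of the classical (1+1) EA on OneMax, the setting the abstract refers to. The function names and the fixed random seed are illustrative choices, not part of the paper; the mutation scheme (flip each bit independently with probability 1/n, accept if not worse) is the standard one whose expected optimization time on OneMax is Θ(n log n).

```python
import random

def one_max(x):
    """OneMax fitness: the number of one-bits in the bit string."""
    return sum(x)

def one_plus_one_ea(n, seed=0):
    """Sketch of the classical (1+1) EA on OneMax.

    Each generation, every bit is flipped independently with
    probability 1/n; the offspring replaces the parent if it is
    at least as fit. Returns the number of generations until the
    all-ones optimum is reached.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    generations = 0
    while one_max(x) < n:
        # Standard bit-wise mutation with rate 1/n.
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        # Elitist selection: keep the offspring if it is not worse.
        if one_max(y) >= one_max(x):
            x = y
        generations += 1
    return generations
```

The quasirandom variants studied in the paper replace the independent coin flips above with mutation schemes that equalize how often each bit position is touched over a window of generations; the abstract's point is that this seemingly helpful derandomization does not, by itself, improve the Θ(n log n) bound.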