On the analysis of the (1+1) evolutionary algorithm
Theoretical Computer Science
Evolutionary algorithms are randomized search heuristics whose general variants have been successfully applied in black-box optimization. In this scenario, the function f to be optimized is not known in advance, and knowledge of f can be obtained only by sampling search points a, thereby revealing the value of f(a). In order to analyze the behavior of different variants of evolutionary algorithms on certain functions f, two quantities are of particular interest: the expected runtime until some optimal search point is sampled, and the success probability, i.e., the probability that an optimal search point is among the first t sampled points, for a given t. Here a simple method for the analysis is discussed and applied to several functions. For specific situations, more involved techniques are necessary. Two such results are presented. First, it is shown that the simplest evolutionary algorithm, the (1+1) EA, optimizes each pseudo-Boolean linear function in an expected time of O(n log n). Second, an example is shown where crossover decreases the expected runtime from superpolynomial to polynomial.
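To make the setting concrete, the following is a minimal sketch of the (1+1) EA discussed in the abstract, applied to OneMax (the linear pseudo-Boolean function counting the ones in a bit string). The function and parameter names here are illustrative, not taken from the paper; the standard mutation rate 1/n is assumed.

```python
import random

def one_plus_one_ea(f, n, max_iters, rng):
    """(1+1) EA sketch: keep a single bit string, flip each bit
    independently with probability 1/n, and accept the offspring
    if its f-value is at least as good (elitist acceptance)."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for _ in range(max_iters):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        fy = f(y)
        if fy >= fx:  # never accept a worsening
            x, fx = y, fy
    return x, fx

def onemax(x):
    # OneMax: a linear pseudo-Boolean function, the number of ones in x
    return sum(x)
```

With a budget well above the O(n log n) expected runtime, the EA reliably samples the all-ones optimum; e.g. `one_plus_one_ea(onemax, 16, 50000, random.Random(1))` returns a string of fitness 16.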