The theory of evolution strategies
The discrepancy method: randomness and complexity
Resampling and its Avoidance in Genetic Algorithms
EP '98 Proceedings of the 7th International Conference on Evolutionary Programming VII
Rigorous runtime analysis of a (μ+1)ES for the sphere function
GECCO '05 Proceedings of the 7th annual conference on Genetic and evolutionary computation
Convergence results for the (1, λ)-SA-ES using the theory of ϕ-irreducible Markov chains
Theoretical Computer Science
Completely Derandomized Self-Adaptation in Evolution Strategies
Evolutionary Computation
General lower bounds for evolutionary algorithms
PPSN'06 Proceedings of the 9th international conference on Parallel Problem Solving from Nature
DCMA: yet another derandomization in covariance-matrix-adaptation
Proceedings of the 9th annual conference on Genetic and evolutionary computation
Why one must use reweighting in estimation of distribution algorithms
Proceedings of the 11th Annual conference on Genetic and evolutionary computation
Mirrored sampling and sequential selection for evolution strategies
PPSN'10 Proceedings of the 11th international conference on Parallel Problem Solving from Nature: Part I
Analyzing the impact of mirrored sampling and sequential selection in elitist evolution strategies
Proceedings of the 11th workshop on Foundations of genetic algorithms
Mirrored sampling in evolution strategies with weighted recombination
Proceedings of the 13th annual conference on Genetic and evolutionary computation
General lower bounds for evolutionary algorithms
PPSN'06 Proceedings of the 9th international conference on Parallel Problem Solving from Nature
In this paper, we show universal lower bounds for isotropic algorithms, i.e. bounds that hold for any algorithm in which each new point is the sum of an already visited point and a random isotropic direction multiplied by a step size (even when the step size is chosen by an oracle with arbitrarily high computational power). The bound is 1 - O(1/d) for the constant of linear convergence (i.e. the constant C such that the distance to the optimum after n steps is upper bounded by C^n), as already seen for some families of evolution strategies in [19,12], in contrast with 1 - O(1) for the reverse case of a random step size and a direction chosen by an oracle with arbitrarily high computational power. We then recall that isotropy does not uniquely determine the distribution of a sample on the sphere, and we show that the convergence rate of isotropic algorithms is improved by using stratified or antithetic isotropy instead of naive isotropy. At the end of the paper we show that, beyond the mathematical proof, the result also holds experimentally. We conclude that one should use antithetic or stratified isotropy, and never standard isotropy.
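The following is a minimal Python sketch of the isotropic scheme described above; it is an illustrative assumption, not the authors' experimental setup. Each candidate is the current point plus a step size times a uniformly random unit direction, and antithetic isotropy is obtained by mirroring every sampled direction. The objective (the sphere function), the offspring count lam, and the oracle-like step-size rule sigma = 0.3 * ||x|| are hypothetical choices made only to contrast naive and antithetic sampling.

# Minimal sketch (illustrative, not the paper's exact setting): compare
# naive isotropic sampling with antithetic (mirrored) isotropic sampling
# on the sphere function f(x) = ||x||^2. The step-size rule and lam are
# hypothetical assumptions chosen only to expose the difference between
# the two sampling schemes.
import numpy as np

def sphere(x):
    return np.dot(x, x)

def random_directions(dim, n, rng):
    # n unit vectors drawn uniformly (isotropically) on the sphere
    g = rng.standard_normal((n, dim))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

def isotropic_search(dim=10, lam=6, steps=200, antithetic=False, seed=0):
    rng = np.random.default_rng(seed)
    x = np.ones(dim)                      # start away from the optimum at 0
    for _ in range(steps):
        sigma = 0.3 * np.linalg.norm(x)   # oracle-like step size: scales with distance to optimum
        if antithetic:
            half = random_directions(dim, lam // 2, rng)
            dirs = np.vstack([half, -half])   # mirrored pairs: d and -d
        else:
            dirs = random_directions(dim, lam, rng)
        candidates = x + sigma * dirs
        values = np.array([sphere(c) for c in candidates])
        best = candidates[np.argmin(values)]
        if sphere(best) < sphere(x):      # elitist acceptance
            x = best
    return np.linalg.norm(x)

if __name__ == "__main__":
    for anti in (False, True):
        dist = isotropic_search(antithetic=anti)
        label = "antithetic" if anti else "naive"
        print(f"{label:10s} isotropy: final distance to optimum = {dist:.3e}")

Running this toy comparison typically shows the mirrored variant reaching a smaller distance to the optimum for the same budget, which is the qualitative effect the abstract attributes to antithetic isotropy; the exact rates proved in the paper are not reproduced by this sketch.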