"Direct Search" Solution of Numerical and Statistical Problems
Journal of the ACM (JACM)
Not all linear functions are equally difficult for the compact genetic algorithm
GECCO '05 Proceedings of the 7th annual conference on Genetic and evolutionary computation
Convergence results for the (1, λ)-SA-ES using the theory of ϕ-irreducible Markov chains
Theoretical Computer Science
Upper and Lower Bounds for Randomized Search Heuristics in Black-Box Optimization
Theory of Computing Systems
PPSN'06 Proceedings of the 9th international conference on Parallel Problem Solving from Nature
Comparison-based algorithms are robust and randomized algorithms are anytime
Evolutionary Computation
On the hardness of offline multi-objective optimization
Evolutionary Computation
Lower Bounds for Evolution Strategies Using VC-Dimension
Proceedings of the 10th international conference on Parallel Problem Solving from Nature: PPSN X
An Evolutionary Perspective on Approximate RDF Query Answering
SUM '08 Proceedings of the 2nd international conference on Scalable Uncertainty Management
Optimal robust expensive optimization is tractable
Proceedings of the 11th Annual conference on Genetic and evolutionary computation
Conditioning, halting criteria and choosing λ
EA'07 Proceedings of the Evolution artificielle, 8th international conference on Artificial evolution
Log-linear convergence and optimal bounds for the (1 + 1)-ES
EA'07 Proceedings of the Evolution artificielle, 8th international conference on Artificial evolution
Log(λ) modifications for optimal parallelism
PPSN'10 Proceedings of the 11th international conference on Parallel problem solving from nature: Part I
Analyzing the impact of mirrored sampling and sequential selection in elitist evolution strategies
Proceedings of the 11th workshop proceedings on Foundations of genetic algorithms
Comparison-based complexity of multiobjective optimization
Proceedings of the 13th annual conference on Genetic and evolutionary computation
Natural evolution strategies converge on sphere functions
Proceedings of the 14th annual conference on Genetic and evolutionary computation
Noisy optimization complexity under locality assumption
Proceedings of the twelfth workshop on Foundations of genetic algorithms XII
Evolutionary optimization, which includes genetic optimization, is a general framework for optimization. It is known to be (i) easy to use, (ii) robust, (iii) derivative-free, and (iv) unfortunately slow. Recent work [8] in particular shows that the convergence rate of some widely used evolution strategies (evolutionary optimization for continuous domains) cannot be faster than linear (i.e., the logarithm of the distance to the optimum cannot decrease faster than linearly), and that the constant of the linear convergence (i.e., the constant C such that the distance to the optimum after n steps is upper bounded by C^n) unfortunately converges quickly to 1 as the dimension increases to ∞. We show here a very wide generalization of this result: all comparison-based algorithms share this limitation. Note that our result also covers methods such as the Hooke & Jeeves algorithm, the simplex method, and any direct search method that only compares fitness values to previously seen fitness values. It does not, however, cover methods that use the fitness values themselves (see [5] for cases in which fitness values are used), even when these methods do not use gradients. The former results concern convergence with respect to the number of comparisons performed, and they also cover a very wide family of algorithms with respect to the number of function evaluations. However, there is still room for faster convergence rates with less conventional algorithms that use the full ranking information of the population, rather than only selections within the population. We prove that, at least in some particular cases, using the full ranking information can improve on these lower bounds, and ultimately we provide superlinear convergence results.
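To make the notion of a comparison-based algorithm concrete, the following is a minimal sketch (not the paper's algorithm; all names and parameter values are illustrative) of a (1+1)-ES with a one-fifth success rule. The optimization loop uses only the *comparison* f(y) ≤ f(x) between candidate points, never the fitness values themselves, so it falls under the lower bound discussed above: on the sphere function, log ‖x_n‖ decreases at best linearly in n, with a constant that degrades as the dimension grows.

```python
import math
import random

def one_plus_one_es(f, x, sigma=1.0, iters=3000, seed=0):
    """Comparison-based (1+1)-ES with a one-fifth success rule.

    Only the outcome of the comparison f(y) <= f(x) is used, never
    the fitness values themselves, so this is a comparison-based
    algorithm in the sense of the text.  (A didactic sketch; the
    adaptation constants are illustrative choices.)
    """
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        # Isotropic Gaussian mutation of every coordinate.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:                    # success: keep the offspring
            x, fx = y, fy
            sigma *= math.exp(0.2)      # and enlarge the step size
        else:
            sigma *= math.exp(-0.05)    # failure: shrink the step size
    return x

# Sphere function: the standard benchmark for convergence-rate analysis.
sphere = lambda x: sum(xi * xi for xi in x)

x0 = [1.0] * 5
x_final = one_plus_one_es(sphere, x0)
print(sphere(x_final))
```

The adaptation factors are balanced so that the step size is stationary at a one-fifth success probability (0.2 · 0.2 − 0.8 · 0.05 = 0). Plotting log of the distance to the optimum against the iteration number yields a roughly straight line, which is exactly the linear (log-linear) convergence regime to which all such comparison-based methods are confined.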