Random number generation and quasi-Monte Carlo methods
The theory of evolution strategies
Numerical Optimization of Computer Models
On the analysis of the (1+1) evolutionary algorithm
Theoretical Computer Science
Perturbation Theory for Evolutionary Algorithms: Towards an Estimation of Convergence Speed
PPSN VI: Proceedings of the 6th International Conference on Parallel Problem Solving from Nature
An Asymptotic Theory of Genetic Algorithms
AE '95: Selected Papers from the European Conference on Artificial Evolution
Convergence results for the (1, λ)-SA-ES using the theory of ϕ-irreducible Markov chains
Theoretical Computer Science
Completely Derandomized Self-Adaptation in Evolution Strategies
Evolutionary Computation
How mutation and selection solve long-path problems in polynomial expected time
Evolutionary Computation
Rigorous hitting times for binary mutations
Evolutionary Computation
Hölder functions and deception of genetic algorithms
IEEE Transactions on Evolutionary Computation
DCMA: yet another derandomization in covariance-matrix-adaptation
Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation
Comparison-based algorithms are robust and randomized algorithms are anytime
Evolutionary Computation
Evolutionary optimization of low-discrepancy sequences
ACM Transactions on Modeling and Computer Simulation (TOMACS)
A rigorous runtime analysis for quasi-random restarts and decreasing stepsize
EA '11: Proceedings of the 10th International Conference on Artificial Evolution
Randomization is an efficient tool for global optimization. We define here a method which keeps: (i) the order-0 character of evolutionary algorithms (no gradient required); (ii) the stochastic aspect of evolutionary algorithms; (iii) the efficiency of so-called "low-dispersion" points; and which ensures, under mild assumptions, global convergence at a linear convergence rate. We use: (i) sampling on a ball instead of Gaussian sampling (in a way inspired by trust regions); (ii) an original rule for step-size adaptation; (iii) quasi-Monte Carlo sampling (low-dispersion points) instead of Monte Carlo sampling. In this framework we prove linear convergence rates (i) for global optimization, not only local optimization; (ii) under very mild assumptions on the regularity of the function (existence of derivatives is not required). Though the main scope of this paper is theoretical, numerical experiments are presented to back up the mathematical results.
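The three ingredients listed in the abstract — gradient-free sampling on a ball around the incumbent, a step-size adaptation rule, and low-dispersion (quasi-Monte Carlo) points — can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's exact algorithm: the Halton sequence stands in for a generic low-dispersion construction, the points are Halton points in the enclosing cube filtered to the ball, and the grow-on-success / shrink-on-failure radius rule is a simple stand-in for the paper's original step-size adaptation; the names `qmc_ball_points` and `qmc_ball_search` are hypothetical.

```python
def halton(index, base):
    # Radical-inverse (van der Corput) value of `index` in the given base;
    # coordinates built from distinct prime bases form a Halton sequence.
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

PRIMES = [2, 3, 5, 7, 11, 13]  # one base per coordinate (dim <= 6 here)

def qmc_ball_points(n, dim, center, radius, offset=1):
    # Low-dispersion points inside the ball B(center, radius): take Halton
    # points in the enclosing cube [-1, 1]^dim and keep those in the unit
    # ball (a simplification; a direct ball construction would also work).
    pts, i = [], offset
    while len(pts) < n:
        u = [2.0 * halton(i, PRIMES[k]) - 1.0 for k in range(dim)]
        i += 1
        if sum(x * x for x in u) <= 1.0:
            pts.append([c + radius * x for c, x in zip(center, u)])
    return pts, i  # return next Halton index so the sequence is not reused

def qmc_ball_search(f, x0, radius=1.0, batch=16, iters=80,
                    shrink=0.5, grow=1.2):
    # Gradient-free (order-0) search: evaluate a QMC batch in a ball around
    # the incumbent; move and enlarge the radius on improvement, contract
    # the radius otherwise (an illustrative step-size adaptation rule).
    x, fx = list(x0), f(x0)
    offset = 1
    for _ in range(iters):
        pts, offset = qmc_ball_points(batch, len(x), x, radius, offset)
        best = min(pts, key=f)
        if f(best) < fx:
            x, fx = best, f(best)
            radius *= grow    # success: explore more widely
        else:
            radius *= shrink  # failure: contract around the incumbent
    return x, fx
```

On a smooth test function such as the sphere function, the incumbent error tracks the shrinking radius, which is the mechanism behind the linear convergence rate claimed in the abstract; the whole procedure is deterministic once the Halton bases are fixed, reflecting the derandomized flavor of low-dispersion sampling.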