A quasi-Monte Carlo method based on the computation of a surrogate model of the fitness function is proposed, and its convergence at the super-linear rate 3/2 is proved under rather mild assumptions on the fitness function, provided the starting point lies within a small neighborhood of a global maximum. A memetic algorithm is then constructed that couples, through selection, a random exploration of the search space with the exploitation of the best-so-far points by this surrogate-based local algorithm. Under the same mild hypotheses, global convergence of the memetic algorithm, at the same 3/2 rate, is proved.
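The structure described above — random exploration of the search space, surrogate-based local exploitation of the best-so-far point, and selection coupling the two — can be illustrated with a minimal one-dimensional sketch. This is not the paper's algorithm: the quasi-Monte-Carlo surrogate step is stood in for by a simple quadratic interpolation through three nearby samples, and the function names (`surrogate_local_step`, `memetic_maximize`) and all parameters are hypothetical.

```python
import random

def surrogate_local_step(f, x, h=0.05):
    """One local exploitation step: fit a quadratic surrogate through
    (x-h, x, x+h) and jump to its maximizer, clipped to [x-h, x+h].
    A hypothetical stand-in for the surrogate local algorithm."""
    xs = (x - h, x, x + h)
    ys = tuple(f(v) for v in xs)
    denom = ys[0] - 2 * ys[1] + ys[2]
    if denom >= 0:  # surrogate not concave: fall back to the best sample
        return xs[ys.index(max(ys))]
    # Vertex of the interpolating parabola, relative to x
    offset = 0.5 * h * (ys[0] - ys[2]) / denom
    return x + max(-h, min(h, offset))

def memetic_maximize(f, lo, hi, pop=8, gens=40, seed=0):
    """Maximize f on [lo, hi]: random exploration plus surrogate-based
    exploitation of the best-so-far point, coupled through selection."""
    rng = random.Random(seed)
    best = rng.uniform(lo, hi)
    for _ in range(gens):
        # Exploration: fresh uniform random candidates
        candidates = [rng.uniform(lo, hi) for _ in range(pop)]
        # Exploitation: refine the best-so-far point with the surrogate step
        candidates.append(surrogate_local_step(f, best))
        candidates.append(best)
        # Selection: keep the fittest point
        best = max(candidates, key=f)
    return best
```

On a smooth unimodal fitness such as `f = lambda x: -(x - 0.3) ** 2`, the quadratic surrogate recovers the maximizer exactly once the best-so-far point is within `h` of it, which is a toy analogue of the fast local convergence claimed for the surrogate step; the superlinear 3/2 rate itself is a property of the paper's method, not of this sketch.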