Algorithms (x, sigma, eta): quasi-random mutations for evolution strategies

  • Authors:
  • Anne Auger, Mohammed Jebalia, Olivier Teytaud

  • Affiliations:
  • CoLab, ETH Zentrum CAB F 84, Zürich, Switzerland; Equipe TAO – INRIA Futurs, LRI, Bât. 490, Université Paris-Sud, Orsay, France; Equipe TAO – INRIA Futurs, LRI, Bât. 490, Université Paris-Sud, Orsay, France

  • Venue:
  • EA'05: Proceedings of the 7th International Conference on Artificial Evolution
  • Year:
  • 2005

Abstract

Randomization is an efficient tool for global optimization. We define here a method which keeps: (i) the order 0 of evolutionary algorithms (no gradient); (ii) the stochastic aspect of evolutionary algorithms; (iii) the efficiency of so-called "low-dispersion" points; and which ensures, under mild assumptions, global convergence with a linear convergence rate. We use (i) sampling on a ball instead of Gaussian sampling (in a way inspired by trust regions); (ii) an original rule for step-size adaptation; (iii) quasi-Monte-Carlo sampling (low-dispersion points) instead of Monte-Carlo sampling. We prove in this framework linear convergence rates (i) for global optimization, and not only local optimization; (ii) under very mild assumptions on the regularity of the function (existence of derivatives is not required). Though the main scope of this paper is theoretical, numerical experiments are made to back up the mathematical results.
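To make the abstract's three ingredients concrete, here is a minimal Python sketch of a derivative-free search of this flavor in dimension 2: offsets are drawn from a Halton low-dispersion sequence mapped onto the unit disk (standing in for the "ball sampling with low-dispersion points"), and the step size grows on success and shrinks on failure. The specific constants (16 offsets, factors 1.5 and 0.5) and the success/failure rule are illustrative assumptions, not the paper's actual step-size adaptation rule.

```python
import math

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def disk_points(n):
    """n low-dispersion points in the unit disk: Halton pairs (bases 2 and 3)
    pushed through the area-preserving polar map r = sqrt(u1), theta = 2*pi*u2."""
    pts = []
    for i in range(1, n + 1):
        u1, u2 = halton(i, 2), halton(i, 3)
        r, theta = math.sqrt(u1), 2.0 * math.pi * u2
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def qr_search(f, x, sigma, iters=200, lam=16, inc=1.5, dec=0.5):
    """Order-0 (gradient-free) search: evaluate f on a quasi-random sample of
    the ball of radius sigma around x, move to the best candidate if it
    improves, widening the ball on success and contracting it on failure.
    inc/dec are illustrative constants, not the paper's adaptation rule."""
    fx = f(x)
    offsets = disk_points(lam)
    for _ in range(iters):
        cands = [(x[0] + sigma * dx, x[1] + sigma * dy) for dx, dy in offsets]
        best = min(cands, key=f)
        fb = f(best)
        if fb < fx:
            x, fx = best, fb
            sigma *= inc   # success: enlarge the trust ball
        else:
            sigma *= dec   # failure: contract toward the incumbent
    return x, fx

# Example on the sphere function, started away from the optimum:
x, fx = qr_search(lambda p: p[0] ** 2 + p[1] ** 2, (3.0, -2.0), 1.0)
```

Because the offsets cover the ball with bounded dispersion, every neighborhood of scale sigma around the incumbent is probed, which is the mechanism behind the paper's global (not merely local) convergence guarantee.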