Comparing the (1+1)-CMA-ES with a mirrored (1+2)-CMA-ES with sequential selection on the noiseless BBOB-2010 testbed

  • Authors:
  • Anne Auger, Dimo Brockhoff, Nikolaus Hansen

  • Affiliation:
  • INRIA Saclay-Ile-de-France, Orsay, France (all authors)

  • Venue:
  • Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation
  • Year:
  • 2010


Abstract

In this paper, we compare the (1+1)-CMA-ES to the (1+2sm)-CMA-ES, a recently introduced quasi-random (1+2)-CMA-ES that uses mirroring as a derandomization technique together with sequential selection. Both algorithms were run with independent restarts until a total budget of $10^{4} D$ function evaluations was reached, where $D$ is the dimension of the search space. On the non-separable ellipsoid function in dimensions 10, 20, and 40, the (1+2sm)-CMA-ES is 17% faster than the best algorithm tested during BBOB-2009 (for the target values $10^{-5}$ and $10^{-7}$). Moreover, the comparison shows that the (1+2sm)-CMA-ES improves over the (1+1)-CMA-ES by about 20% on the ellipsoid, the discus, and the sum of different powers functions, and by 12% on the sphere function. Finally, we never observe a statistically significant result in which the (1+2sm)-CMA-ES is worse than the (1+1)-CMA-ES.
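The combination of mirroring and sequential selection mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a fixed step size and identity covariance, whereas the actual (1+2sm)-CMA-ES additionally adapts the step size and the full covariance matrix; all function and variable names here are ours.

```python
import numpy as np

def mirrored_sequential_step(f, parent, f_parent, sigma, rng):
    """One iteration of a simplified (1+2sm)-ES step (sketch).

    The two offspring are mirrored: they reuse the same Gaussian
    sample z with opposite signs. Sequential selection means the
    second offspring is only evaluated if the first one does not
    already improve on the parent.
    """
    z = rng.standard_normal(parent.shape)
    x1 = parent + sigma * z              # first offspring
    f1 = f(x1)
    if f1 < f_parent:                    # sequential selection:
        return x1, f1, 1                 # accept, skip mirrored offspring
    x2 = parent - sigma * z              # mirrored offspring reuses -z
    f2 = f(x2)
    if f2 < f_parent:
        return x2, f2, 2
    return parent, f_parent, 2           # no improvement: keep parent
```

The saved evaluation when the first offspring already succeeds is what makes sequential selection pay off on functions such as the sphere, where roughly half of the mirrored pairs contain an improving first sample.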