Benchmarking the local metamodel CMA-ES on the BBOB 2013 noiseless function testbed
This paper evaluates the performance of a variant of the local meta-model CMA-ES (lmm-CMA) in the BBOB 2013 expensive setting. The lmm-CMA is a surrogate-assisted variant of the CMA-ES: function evaluations are saved by building, with weighted regression, full quadratic meta-models that estimate the candidate solutions' function values. The quality of the approximation is assessed by checking how much the predicted ranking changes when a fraction of the candidate solutions is evaluated on the original objective function. The results are compared with CMA-ES without meta-modeling and with previously benchmarked algorithms, namely BFGS, NEWUOA, and saACM. The additional meta-modeling improves the performance of CMA-ES on almost all BBOB functions, giving significantly worse results only on the attractive sector function. Over all functions, the performance is comparable with saACM, and the lmm-CMA often outperforms NEWUOA and BFGS starting from about 2D² function evaluations, where D is the search space dimension.
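The two core ingredients described above — a locally weighted full quadratic meta-model and a rank-change test on a partially re-evaluated population — can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the distance kernel, the bandwidth, and the rank-change criterion (here, a simple "did the ranking change at all" check) are assumptions for the sake of a runnable example.

```python
import numpy as np

def quadratic_features(x):
    # Full quadratic basis: 1, x_i, and all products x_i * x_j (i <= j).
    d = len(x)
    feats = [1.0] + list(x)
    for i in range(d):
        for j in range(i, d):
            feats.append(x[i] * x[j])
    return np.array(feats)

def fit_meta_model(X, y, center, bandwidth=1.0):
    # Weighted least-squares fit of a full quadratic model to archive
    # points (X, y), weighting points by distance to `center`.
    # The Gaussian kernel and `bandwidth` are illustrative choices.
    dists = np.linalg.norm(X - center, axis=1)
    w = np.exp(-(dists / bandwidth) ** 2)
    Phi = np.array([quadratic_features(x) for x in X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Phi * sw[:, None], y * sw, rcond=None)
    return coef

def predict(coef, x):
    # Meta-model estimate of the objective at x.
    return quadratic_features(x) @ coef

def ranks(values):
    # Rank of each entry (0 = smallest), used to compare orderings.
    return np.argsort(np.argsort(values))

def rank_changed(pred, true_vals, eval_idx):
    # Simplified acceptance test: replace the predictions of the
    # fraction `eval_idx` that was evaluated on the true objective and
    # report whether the induced ranking of the candidates changes.
    mixed = pred.copy()
    mixed[eval_idx] = true_vals
    return not np.array_equal(ranks(pred), ranks(mixed))
```

On an exactly quadratic objective the weighted fit recovers the function, so the predicted ranking is exact and no further true evaluations would be triggered; on non-quadratic functions the rank-change test is what controls how many candidates must be evaluated directly.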