Is the meta-EA a viable optimization method?

  • Authors: Sean Luke; AKM Khaled Ahsan Talukder

  • Affiliation: George Mason University, Washington, DC, USA

  • Venue: Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13)
  • Year: 2013

Abstract

Meta-evolutionary algorithms have long been proposed as an approach to automatically discover good parameter settings to use in later optimization runs. In this paper we instead ask whether a meta-evolutionary algorithm makes sense as an optimizer in its own right. That is, we're not interested in the resulting parameter settings, but only in the final result. As it so happens, this use of meta-EAs makes sense in the context of large numbers of parallel runs, particularly in massive distributed scenarios. A primary issue facing meta-EAs is the stochastic nature of the meta-level fitness function. We consider whether this poses a challenge to establishing a gradient in the meta-level search space, and to what degree multiple tests are helpful in smoothing the noise. We discuss the nature of the meta-level search space and its impact on local optima, then examine the degree to which exploitation can be applied. We find that meta-EAs perform well as optimizers, and very surprisingly that they do best with only a single test. More exploitation appears to reduce performance, but only slightly.
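
The structure the abstract describes is an outer EA that evolves parameter settings for an inner EA, where the meta-level fitness of a parameter vector is the (stochastic) result of actually running the inner EA with those parameters, and where only the best inner-run result is kept. The sketch below is a minimal, hypothetical illustration of that structure, not the authors' experimental code: the sphere objective, all function names, and all parameter ranges are assumptions made for the example. The num_tests parameter corresponds to the number of repeated tests used to smooth the noisy meta-level fitness; the paper's surprising finding is that a single test works best when the goal is optimization rather than parameter discovery.

```python
import random

# Hypothetical inner objective: a simple sphere function standing in for
# whatever underlying problem the inner EA optimizes (maximize; optimum 0).
def inner_objective(x):
    return -sum(v * v for v in x)

def inner_ea(mutation_sigma, pop_size, generations=30, dim=5):
    """A bare-bones truncation-selection inner EA whose behavior depends on
    the parameters the meta-EA evolves. Returns the best fitness found."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = max(inner_objective(ind) for ind in pop)
    for _ in range(generations):
        parents = sorted(pop, key=inner_objective, reverse=True)[: max(1, pop_size // 2)]
        pop = [[g + random.gauss(0, mutation_sigma) for g in random.choice(parents)]
               for _ in range(pop_size)]
        best = max(best, max(inner_objective(ind) for ind in pop))
    return best

def meta_fitness(params, num_tests=1):
    """Stochastic meta-level fitness: run the inner EA num_tests times with
    the candidate parameter settings and average the results."""
    sigma, pop_size = params
    return sum(inner_ea(sigma, pop_size) for _ in range(num_tests)) / num_tests

def meta_ea(meta_pop_size=10, meta_generations=20, num_tests=1):
    """Meta-EA used as an optimizer: we keep only the best inner-EA result
    ever observed, not the evolved parameter settings themselves."""
    meta_pop = [(random.uniform(0.01, 2.0), random.randint(4, 40))
                for _ in range(meta_pop_size)]
    best_result = float("-inf")
    for _ in range(meta_generations):
        scored = [(meta_fitness(p, num_tests), p) for p in meta_pop]
        best_result = max(best_result, max(s for s, _ in scored))
        elites = [p for _, p in sorted(scored, reverse=True)[: meta_pop_size // 2]]
        # Mutate parameter vectors drawn from the elites to form the next
        # meta-population (illustrative operators, not the paper's).
        meta_pop = [(max(0.001, sigma + random.gauss(0, 0.1)),
                     max(2, ps + random.choice([-2, -1, 0, 1, 2])))
                    for sigma, ps in (random.choice(elites)
                                      for _ in range(meta_pop_size))]
    return best_result

if __name__ == "__main__":
    print("Best inner-EA result found:", meta_ea(num_tests=1))
```

Note how the design separates the two questions the abstract raises: meta_fitness isolates the noisy meta-level evaluation (and the num_tests smoothing knob), while meta_ea discards the parameter settings and tracks only best_result, matching the "optimizer in its own right" framing.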