Evolutionary Optimization Guided by Entropy-Based Discretization

  • Authors:
  • Guleng Sheri;David W. Corne

  • Affiliations:
  • Department of Computer Science, Heriot-Watt University, Edinburgh, UK; Department of Computer Science, Heriot-Watt University, Edinburgh, UK

  • Venue:
  • EvoWorkshops '09 Proceedings of the EvoWorkshops 2009 on Applications of Evolutionary Computing: EvoCOMNET, EvoENVIRONMENT, EvoFIN, EvoGAMES, EvoHOT, EvoIASP, EvoINTERACTION, EvoMUSART, EvoNUM, EvoSTOC, EvoTRANSLOG
  • Year:
  • 2009

Abstract

The Learnable Evolution Model (LEM), which involves alternating periods of optimization and learning, performs extremely well on a range of problems and specialises in achieving good results in relatively few function evaluations. LEM implementations tend to use sophisticated learning strategies. Here we continue an exploration of alternative and simpler learning strategies, and try Entropy-based Discretization (ED), whereby, for each parameter in the search space, we infer from recently evaluated samples what seems to be a `good' interval. We find that LEM(ED) provides significant advantages in both solution speed and quality over the unadorned evolutionary algorithm, and is usually superior to CMA-ES when the number of evaluations is limited. It is interesting to see such improvement gained from an easily implemented approach. LEM(ED) can be tentatively recommended for trial on problems where good results are needed in relatively few fitness evaluations, and it remains open to several routes of extension and further sophistication. Finally, although the results reported here are not based on a modern function optimization suite, ongoing work confirms that our findings remain valid for non-separable functions.
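
The abstract does not give implementation details, but the core idea of ED can be illustrated with a minimal sketch: for one parameter, label recent samples as `good'/`bad' by fitness, find the cut point that minimises the weighted entropy of the two sides, and keep the side with the higher fraction of good samples as the inferred interval. The helper names, the single-cut rule, and the toy objective below are assumptions for illustration only, not the authors' exact LEM(ED) procedure.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a binary label array (1 = 'good', 0 = 'bad')."""
    if len(labels) == 0:
        return 0.0
    p = labels.mean()
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def best_cut(values, labels):
    """Cut point on one parameter that minimises the weighted entropy
    of the two resulting subsets of labels."""
    order = np.argsort(values)
    v, y = values[order], labels[order]
    best, best_score = None, np.inf
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue
        cut = 0.5 * (v[i] + v[i - 1])
        left, right = y[:i], y[i:]
        score = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if score < best_score:
            best, best_score = cut, score
    return best

def good_interval(values, labels, lower, upper):
    """Infer a 'good' interval for one parameter: split once at the best
    entropy cut and keep the side with more good samples (assumed rule)."""
    cut = best_cut(values, labels)
    if cut is None:
        return lower, upper
    left_good = labels[values <= cut].mean()
    right_good = labels[values > cut].mean()
    return (lower, cut) if left_good >= right_good else (cut, upper)

# Toy usage: one parameter in [0, 10]; 'good' = top half by a toy fitness.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 40)
fitness = -(x - 7.0) ** 2                       # assumed toy objective, optimum near 7
labels = (fitness >= np.median(fitness)).astype(int)
print(good_interval(x, labels, 0.0, 10.0))      # interval biased towards 7
```

In a LEM-style loop, such per-parameter intervals would be recomputed each learning phase and used to bias where the evolutionary algorithm samples new candidates; the exact way the intervals guide reproduction is left to the paper.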