Evaluating optimization algorithms: bounds on the performance of optimizers on unseen problems

  • Authors:
  • David Corne; Alan Reynolds

  • Affiliations:
  • Heriot-Watt University, Edinburgh, United Kingdom; Heriot-Watt University, Edinburgh, United Kingdom

  • Venue:
  • Proceedings of the 13th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '11)
  • Year:
  • 2011

Abstract

In this extended abstract, we look at the common practice of using optimization problem test suites to develop and/or evaluate optimization algorithms, and bring to bear on this practice a number of results available from computational learning theory. This enables optimization algorithm developers to express principled quantitative bounds on the likely performance of their algorithms on unseen problem instances, on the basis of details of their experimental design and empirical results on training or test instances. We first recap some relevant results from computational learning theory, and then describe how optimization development practice can be suitably recast in a way that enables these results to be applied. We then briefly discuss some related implications. An updated version of this article and associated material, including statistical tables relating to generalization bounds, are provided at http://is.gd/evalopt.
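The kind of quantitative guarantee the abstract refers to can be illustrated with a standard result from computational learning theory. As a minimal sketch (not the paper's own derivation, which is available via the linked material), a one-sided Hoeffding bound turns an empirical success rate measured on n independently drawn test instances into a high-confidence bound on the expected rate over unseen instances drawn from the same distribution:

```python
import math

def hoeffding_upper_bound(emp_mean: float, n: int, delta: float) -> float:
    """One-sided Hoeffding bound for a [0, 1]-valued performance measure.

    With probability at least 1 - delta over the random draw of n i.i.d.
    test instances, the true expected value is at most
    emp_mean + sqrt(ln(1/delta) / (2 n)).
    """
    return emp_mean + math.sqrt(math.log(1.0 / delta) / (2 * n))

def sample_size_for(epsilon: float, delta: float) -> int:
    """Number of i.i.d. test instances needed so the Hoeffding gap
    between empirical and true mean is at most epsilon, with
    probability at least 1 - delta."""
    return math.ceil(math.log(1.0 / delta) / (2 * epsilon ** 2))

# Example: a 20% empirical failure rate on 100 test instances gives,
# at 95% confidence, a true failure rate of at most ~32.2%.
bound = hoeffding_upper_bound(0.2, n=100, delta=0.05)
```

For instance, `sample_size_for(0.1, 0.05)` shows that roughly 150 test instances suffice to pin the true rate to within 0.1 of the measured rate at 95% confidence; this is the sense in which experimental design (the number and independence of test instances) determines the strength of the claim one can make about unseen problems. The function names here are illustrative, not from the paper.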