Tradeoffs in the empirical evaluation of competing algorithm designs

  • Authors:
  • Frank Hutter; Holger H. Hoos; Kevin Leyton-Brown

  • Affiliations:
  • University of British Columbia, Vancouver, Canada V6T 1Z4 (all authors)

  • Venue:
  • Annals of Mathematics and Artificial Intelligence
  • Year:
  • 2010

Abstract

We propose an empirical analysis approach for characterizing tradeoffs between different methods for comparing a set of competing algorithm designs. Our approach can provide insight into performance variation both across candidate algorithms and across problem instances. It can also identify the best tradeoff between evaluating a larger number of candidate algorithm designs, performing these evaluations on a larger number of problem instances, and allocating more runtime to each algorithm run. We applied our approach to study the rich algorithm design spaces offered by three highly parameterized, state-of-the-art algorithms for satisfiability and mixed integer programming, considering six different distributions of problem instances. We demonstrate that the resulting algorithm design scenarios differ in many important ways, with consequences for both automatic and manual algorithm design. We expect that both our methods and our findings will lead to tangible improvements in algorithm design methods.
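To make the central tradeoff concrete: under a fixed total compute budget, an evaluation protocol must divide its time among the number of candidate designs, the number of problem instances per design, and the per-run cutoff time. The following is a minimal illustrative sketch (not the authors' code); the budget, the candidate counts, and the cutoff values are all hypothetical parameters chosen for illustration.

```python
import itertools

# Hypothetical illustration (not the authors' method): enumerate evaluation
# protocols whose worst-case cost fits within a fixed CPU-time budget.

TOTAL_BUDGET = 86_400              # e.g. one CPU-day, in seconds (assumption)
DESIGN_COUNTS = [10, 100, 1000]    # candidate algorithm designs to evaluate
INSTANCE_COUNTS = [10, 50, 100]    # problem instances per candidate
CUTOFFS = [1, 10, 60, 300]         # per-run cutoff times, in seconds


def feasible_protocols(budget):
    """Yield (n_designs, n_instances, cutoff, cost) tuples whose worst-case
    cost -- every run hitting its cutoff -- stays within the budget."""
    for n_designs, n_instances, cutoff in itertools.product(
            DESIGN_COUNTS, INSTANCE_COUNTS, CUTOFFS):
        worst_case_cost = n_designs * n_instances * cutoff
        if worst_case_cost <= budget:
            yield n_designs, n_instances, cutoff, worst_case_cost


for protocol in feasible_protocols(TOTAL_BUDGET):
    print("designs=%4d  instances=%3d  cutoff=%4ds  cost=%6ds" % protocol)
```

The sketch shows why the three dimensions compete: doubling any one of them doubles the worst-case cost, so a fixed budget forces a choice between breadth over designs, breadth over instances, and depth per run.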