Empirical hardness models: Methodology and a case study on combinatorial auctions

  • Authors:
  • Kevin Leyton-Brown; Eugene Nudelman; Yoav Shoham

  • Affiliations:
  • University of British Columbia, Vancouver, British Columbia, Canada; Stanford University, Stanford, California; Stanford University, Stanford, California

  • Venue:
  • Journal of the ACM (JACM)
  • Year:
  • 2009

Abstract

Is it possible to predict how long an algorithm will take to solve a previously unseen instance of an NP-complete problem? If so, what uses can be found for models that make such predictions? This article provides answers to these questions and evaluates the answers experimentally. We propose the use of supervised machine learning to build models that predict an algorithm's runtime given a problem instance. We discuss the construction of these models and describe techniques for interpreting them to gain understanding of the characteristics that cause instances to be hard or easy. We also present two applications of our models: building algorithm portfolios that outperform their constituent algorithms, and generating test distributions that emphasize hard problems. We demonstrate the effectiveness of our techniques in a case study of the combinatorial auction winner determination problem. Our experimental results show that we can build very accurate models of an algorithm's running time, interpret our models, build an algorithm portfolio that strongly outperforms the best single algorithm, and tune a standard benchmark suite to generate much harder problem instances.
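To make the two core ideas in the abstract concrete, here is a minimal sketch (not the authors' code) of how an empirical hardness model and a portfolio built on top of it might look: each solver gets a supervised regression model mapping instance features to (log) runtime, and the portfolio runs whichever solver is predicted to be fastest on a new instance. The feature names, synthetic data, and solver labels below are illustrative placeholders, not the features or solvers used in the paper.

```python
# Sketch: empirical hardness models + a model-based algorithm portfolio.
# Assumes hypothetical instance features and synthetic runtimes.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical instance features (e.g., number of goods, number of bids,
# bid-graph density) and measured runtimes for two solvers on training instances.
X_train = rng.random((200, 3))
runtime_solver_a = np.exp(2.0 * X_train[:, 0] + rng.normal(0.0, 0.1, 200))
runtime_solver_b = np.exp(1.5 * X_train[:, 1] + rng.normal(0.0, 0.1, 200))

# One empirical hardness model per solver: regress log runtime on features
# (log runtime is a natural target since runtimes span orders of magnitude).
model_a = LinearRegression().fit(X_train, np.log(runtime_solver_a))
model_b = LinearRegression().fit(X_train, np.log(runtime_solver_b))

def portfolio_choice(features):
    """Return the solver whose model predicts the smallest runtime."""
    f = np.asarray(features, dtype=float).reshape(1, -1)
    predictions = {
        "solver_a": model_a.predict(f)[0],
        "solver_b": model_b.predict(f)[0],
    }
    return min(predictions, key=predictions.get)

# Decide which solver to run on a previously unseen instance.
print(portfolio_choice([0.8, 0.2, 0.5]))
```

In this sketch the portfolio can outperform either single solver whenever the two solvers are fast on different regions of the feature space, which is the intuition behind the portfolio results reported in the abstract.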