MiDataSets: creating the conditions for a more realistic evaluation of iterative optimization

  • Authors:
  • Grigori Fursin (ALCHEMY Group, INRIA Futurs and LRI, Paris-Sud University, France)
  • John Cavazos (Institute for Computing Systems Architecture, University of Edinburgh, UK)
  • Michael O'Boyle (Institute for Computing Systems Architecture, University of Edinburgh, UK)
  • Olivier Temam (ALCHEMY Group, INRIA Futurs and LRI, Paris-Sud University, France)

  • Venue:
  • HiPEAC'07: Proceedings of the 2nd International Conference on High Performance Embedded Architectures and Compilers
  • Year:
  • 2007


Abstract

Iterative optimization has become a popular technique for obtaining improvements over a compiler's default settings for performance-critical applications, such as embedded applications. An implicit assumption, however, is that the best configuration found for one arbitrary data set will work well on the other data sets that a program uses. In this article, we evaluate that assumption using 20 data sets per benchmark of the MiBench suite. We find that, although a majority of programs exhibit stable performance across data sets, performance variability can increase significantly when many optimizations are applied. However, for the best optimization configurations, this variability is in fact small. Furthermore, we show that it is possible to find a compromise optimization configuration across data sets which is often within 5% of the best possible configuration for most data sets, and that the iterative process can converge in fewer than 20 iterations (for a population of 200 optimization configurations). All these conclusions have significant and positive implications for the practical utilization of iterative optimization.
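The search described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): it assumes a table of execution times, one per (configuration, data set) pair, filled here with synthetic random values in place of real measurements. It then iterates over randomly sampled configurations, keeping the one with the smallest worst-case slowdown relative to the per-data-set best, and stops early if a configuration within 5% of optimal on every data set is found. With uncorrelated synthetic times the 5% threshold is rarely reached and the loop simply exhausts all candidates; the paper's result is that on real programs, where good configurations tend to be good across data sets, the search converges much faster.

```python
import random

random.seed(0)

N_CONFIGS = 200   # population of optimization configurations (as in the paper)
N_DATASETS = 20   # data sets per benchmark (as in MiDataSets)

# Stand-in for measured execution times: in a real setting, times[c][d] would
# come from compiling with configuration c and running on data set d.
times = [[1.0 + random.random() for _ in range(N_DATASETS)]
         for _ in range(N_CONFIGS)]

# Best achievable time per data set (oracle over all configurations).
best_per_dataset = [min(times[c][d] for c in range(N_CONFIGS))
                    for d in range(N_DATASETS)]

def worst_case_slowdown(c):
    """Largest slowdown of configuration c relative to the per-data-set best."""
    return max(times[c][d] / best_per_dataset[d] for d in range(N_DATASETS))

# Iterative search: evaluate configurations in random order, keep the best
# compromise seen so far, and stop once it is within 5% of optimal on
# every data set (the threshold reported in the paper).
best_config, best_score = None, float("inf")
iteration = 0
for c in random.sample(range(N_CONFIGS), N_CONFIGS):
    iteration += 1
    score = worst_case_slowdown(c)
    if score < best_score:
        best_config, best_score = c, score
    if best_score <= 1.05:
        break

print(f"compromise configuration {best_config} after {iteration} iterations; "
      f"worst-case slowdown {best_score:.3f}")
```

The choice of worst-case slowdown as the objective is one reasonable way to formalize "a compromise configuration across data sets"; average slowdown over the data sets would be an equally plausible alternative.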