How Well Can Simple Metrics Represent the Performance of HPC Applications?

  • Authors:
  • Laura C. Carrington; Michael Laurenzano; Allan Snavely; Roy L. Campbell; Larry P. Davis

  • Affiliations:
  • San Diego Supercomputer Center, San Diego, CA; CSE Dept., University of California, San Diego; CSE Dept., University of California, San Diego; Army Research Laboratory, Major Shared Resource Center, Aberdeen Proving Ground, MD; High Performance Computing Modernization Program Office, Arlington, VA

  • Venue:
  • SC '05: Proceedings of the 2005 ACM/IEEE Conference on Supercomputing
  • Year:
  • 2005

Abstract

This paper presents a systematic study of how the complexity of a performance-prediction methodology affects its accuracy, evaluated with a set of real applications on a variety of HPC systems. The results indicate that no single, simple synthetic metric predicts performance adequately, and that a linear combination of such metrics with optimized weights also performs poorly. Methodologies that convolve an application "transfer function", derived from tracing information, with system performance data measured by simple benchmarks do better: in the current work, this approach predicts performance with an average accuracy of 80%.
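As a rough illustration of the two approaches the abstract contrasts (a hypothetical sketch, not the authors' actual framework), the snippet below fits a weighted linear combination of simple machine benchmark metrics to observed runtimes, and then contrasts it with a convolution-style prediction that divides application operation counts gathered from tracing by the corresponding measured machine rates. All metric names, counts, and rates are invented for illustration.

```python
# Hypothetical sketch of the two prediction styles discussed in the abstract:
# (1) a weighted linear combination of simple machine benchmark metrics, and
# (2) a convolution of an application "transfer function" (operation counts
# from tracing) with measured machine rates.  All numbers are made up.
import numpy as np

# Simple synthetic metrics per machine: peak GFLOPS, memory bandwidth (GB/s),
# and network bandwidth (GB/s).  One row per machine.
machine_metrics = np.array([
    [6.0, 4.0, 0.3],
    [4.0, 6.5, 0.5],
    [8.0, 3.0, 0.9],
    [5.0, 5.0, 0.4],
    [7.0, 4.5, 0.6],
    [3.5, 7.0, 0.2],
])

# Observed runtime (seconds) of one application on each machine.
observed_runtime = np.array([120.0, 95.0, 130.0, 105.0, 110.0, 90.0])

# --- Approach 1: linear combination of simple metrics with optimized weights.
# Fit runtime ~ w0 + w1/flops + w2/mem_bw + w3/net_bw by least squares
# (inverse rates, since higher rates should imply lower runtime).
design = np.column_stack([np.ones(len(machine_metrics)), 1.0 / machine_metrics])
weights, *_ = np.linalg.lstsq(design, observed_runtime, rcond=None)
linear_prediction = design @ weights

# --- Approach 2: convolution-style prediction.  Tracing the application
# yields per-resource operation counts (Gflop, GB moved, GB communicated);
# dividing each count by the machine's measured rate and summing the
# per-resource times gives a predicted runtime for each machine.
app_counts = np.array([300.0, 250.0, 20.0])
convolved_prediction = (app_counts / machine_metrics).sum(axis=1)

for name, pred in [("linear combination", linear_prediction),
                   ("convolution", convolved_prediction)]:
    rel_err = np.abs(pred - observed_runtime) / observed_runtime
    print(f"{name:>18}: mean relative error {rel_err.mean():.2f}")
```

In this toy setup both predictors are fit or evaluated against the same observed runtimes; the paper's point is that, over many real applications and systems, the trace-based convolution approach generalizes far better than any single metric or weighted combination of metrics.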