A Framework for Computer Performance Evaluation Using Benchmark Sets

  • Authors:
  • Umesh Krishnaswamy; Isaac D. Scherson

  • Affiliations:
  • Juniper Networks, Sunnyvale, CA; Univ. of California, Irvine

  • Venue:
  • IEEE Transactions on Computers
  • Year:
  • 2000

Abstract

Benchmarking is a widely used approach to measuring computer performance. Current benchmarking practice reports only the running times of a tested system, and glancing through these execution times reveals little about the system's strengths and weaknesses. A novel benchmarking methodology is proposed to identify key performance parameters; it is based on measuring performance vectors. A performance vector is a vector of ratings representing the delivered performance of a system's primitive operations. To measure performance vectors, a geometric model is proposed that defines system behavior using the concepts of support points, context lattice, and operating points. Beyond the performance vector itself, other metrics derivable from the geometric model include the variation in system performance and the compliance of benchmarks. Using this methodology, the performance vectors of the Sun SuperSPARC (a desktop workstation) and the Cray C90 (a vector supercomputer) are evaluated using the SPEC and Perfect Club benchmarks, respectively. The methodology respects several practical constraints in benchmarking: the required instrumentation is minimal; the benchmarks are realistic rather than synthetic, so they reflect delivered rather than peak performance; and operations in the performance vector are not measured individually, since there may be significant interplay in their execution.
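
As a rough illustration of the performance-vector idea only, the following Python sketch infers per-operation ratings from whole-benchmark timings. It is not the paper's geometric model (support points, context lattice, operating points); a non-negative least-squares fit stands in as a simplified analogue, and the operation names, counts, and benchmark times are invented for the example.

```python
# Hypothetical sketch: recovering a performance vector (per-operation
# ratings) from whole-benchmark running times, rather than timing each
# primitive operation in isolation. All numbers below are made up.
import numpy as np
from scipy.optimize import nnls

# counts[i, j] = number of times benchmark i executes primitive operation j.
# Three illustrative operation classes: flop, memory access, branch.
counts = np.array([
    [2.0e9, 8.0e9, 1.0e9],   # benchmark 0
    [6.0e9, 3.0e9, 2.0e9],   # benchmark 1
    [1.0e9, 9.0e9, 4.0e9],   # benchmark 2
    [5.0e9, 5.0e9, 3.0e9],   # benchmark 3
])
times = np.array([55.0, 42.0, 70.0, 58.0])  # measured wall-clock seconds

# Model: time_i ~= sum_j counts[i, j] / rating_j.
# Fit x_j = 1 / rating_j with x >= 0, then invert to get ratings.
x, residual = nnls(counts, times)
with np.errstate(divide="ignore"):
    ratings = 1.0 / x  # inf where the fit assigns zero cost to an operation

for name, r in zip(["flop", "mem", "branch"], ratings):
    print(f"{name}: {r:.3e} ops/sec")
print(f"fit residual: {residual:.2f} s")
```

Because the ratings are fit jointly against full benchmark runs, the interplay between operations is absorbed into the estimates, which is the spirit of the paper's constraint that operations not be measured individually.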