A number of scientific and engineering benchmarks emerged during the 1980s, each with a different origin, methodology, and interpretation. This report presents a case study of two current scientific benchmarks and compares them on the basis of their instruction mixes as measured by the CRAY X-MP hardware performance monitor (HPM). The case study was conducted by graduate students in a Performance Evaluation course taught during Spring Quarter 1991 in the Department of Computer and Information Sciences at the University of Alabama at Birmingham.

The students analyzed the dominant loops of the application-based Perfect Benchmarks and noted, where applicable, significant performance comparisons with the loop-based Livermore Fortran Kernels. It remains unclear whether any collection of kernel- or loop-based benchmarks can effectively predict the performance of more sophisticated scientific application programs. The case study does reveal, however, which types of loops are most prevalent in codes from various scientific applications and how those loops affect the overall performance of the applications.
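To illustrate the kind of instruction-mix comparison described above, the sketch below normalizes raw event counts into percentage shares, which is how hardware-monitor data are typically compared across benchmarks. All event names and counts here are hypothetical placeholders, not measurements from the CRAY X-MP study.

```python
def instruction_mix(counts):
    """Convert raw per-event instruction counts into percentage shares.

    `counts` maps an event name (e.g. floating-point adds, memory loads)
    to the number of such instructions retired during a benchmark run.
    """
    total = sum(counts.values())
    return {event: 100.0 * n / total for event, n in counts.items()}

# Hypothetical counts for a single loop-based kernel; a real study would
# collect one such dictionary per benchmark from the hardware monitor
# and compare the resulting mixes side by side.
kernel_counts = {"fp_add": 500, "fp_mul": 400, "mem_load": 600, "mem_store": 100}

mix = instruction_mix(kernel_counts)
for event, pct in sorted(mix.items(), key=lambda kv: -kv[1]):
    print(f"{event:10s} {pct:5.1f}%")
```

Running the same normalization over an application benchmark and a loop kernel puts both on a common scale, so a mismatch in, say, the memory-reference share is immediately visible even when the raw instruction totals differ by orders of magnitude.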