A popular U.S. talk show host uses "top 10" lists to critique events and culture every night. Our HPC industry is captivated by another list, the TOP500 list, which ranks HPC systems by the FLOPS they achieve on a single, long-lived benchmark: Linpack. The TOP500 list has grown in influence largely because of its value as a marketing tool, yet it describes the performance of HPC systems simplistically and unrealistically. Its proponents have advocated for the list for different reasons at different times. This paper presents the top 10 problems with the TOP500 list and suggests how to correct those shortcomings. It discusses why the TOP500 list limits the impact of HPC systems on real problems, and it identifies other metrics that may more meaningfully and usefully represent the real effectiveness and value of HPC systems.
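One well-known pitfall of single-number performance summaries, discussed in the classic literature on benchmark means, is that the arithmetic mean of runtimes normalized to a reference machine depends on which machine is chosen as the reference, while the geometric mean does not. The sketch below uses made-up runtimes for two hypothetical systems (the data and helper names are illustrative, not from the paper) to show how the arithmetic-mean ranking flips with the choice of reference:

```python
from math import prod

# Illustrative (made-up) runtimes in seconds for two systems on two
# benchmarks; lower is better. System A is fast on benchmark 1, B on 2.
runtimes = {"A": [1.0, 10.0], "B": [10.0, 1.0]}

def normalize(ref):
    """Runtime ratios of every system relative to reference system `ref`."""
    base = runtimes[ref]
    return {s: [t / b for t, b in zip(ts, base)] for s, ts in runtimes.items()}

def amean(xs):
    """Arithmetic mean."""
    return sum(xs) / len(xs)

def gmean(xs):
    """Geometric mean."""
    return prod(xs) ** (1.0 / len(xs))

# Arithmetic mean of normalized runtimes: whichever system we normalize
# to appears to "win", so the ranking depends on the reference machine.
norm_a, norm_b = normalize("A"), normalize("B")
print("normalized to A:", {s: amean(v) for s, v in norm_a.items()})
print("normalized to B:", {s: amean(v) for s, v in norm_b.items()})

# Geometric mean is reference-independent: here the two systems tie
# regardless of which system the runtimes are normalized to.
print("geomean (to A):", {s: round(gmean(v), 3) for s, v in norm_a.items()})
print("geomean (to B):", {s: round(gmean(v), 3) for s, v in norm_b.items()})
```

With these numbers, normalizing to A makes B look 5x slower on average, while normalizing to B makes A look 5x slower; the geometric mean reports a tie either way. This is one reason a single number drawn from one workload, or one ill-chosen aggregate, can misrepresent the relative value of HPC systems.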