Simulations are only as good as the accuracy of the model. A truism, but one seemingly forgotten at times in the "benchmarking" of computer systems: the judging of their speed and performance running standardized programs. For truly useful comparative benchmarking, not only must the conditions of the experiment be controlled, but the premises of the experiment must be valid. That is, the standard benchmark programs must be a reasonably good approximation of the applications that real users actually run on real machines under real working conditions. Too much simplification in the codes, and the results are of limited use in predicting performance.

SPEC, the Standard Performance Evaluation Corporation, has been a leader in evaluating workstation performance since its formation in the late 1980s. Now a group within SPEC is turning its attention to real high-performance computing, and it is emphasizing realistic scientific and industrial codes as benchmarks. A seismic code for oil exploration and a computational chemistry program are the first two components of the SPEChpc96 suite. SPEC hopes that interest and participation from users will help speed the somewhat sluggish growth of real HPC performance.