Performance evaluation of supercomputers using HPCC and IMB Benchmarks

  • Authors:
  • Subhash Saini; Robert Ciotti; Brian T. N. Gunney; Thomas E. Spelce; Alice Koniges; Don Dossa; Panagiotis Adamidis; Rolf Rabenseifner; Sunil R. Tiyyagura; Matthias Mueller

  • Affiliations:
  • NASA Advanced Supercomputing, NASA Ames Research Center, Moffett Field, CA 94035, USA; Lawrence Livermore National Laboratory, Livermore, CA 94550, USA; German Climate Computing Center, Hamburg, Germany; High-Performance Computing Center (HLRS), University of Stuttgart, Nobelstr. 19, D-70550 Stuttgart, Germany; ZIH, TU Dresden, Zellescher Weg 12, D-01069 Dresden, Germany

  • Venue:
  • Journal of Computer and System Sciences
  • Year:
  • 2008

Abstract

The HPC Challenge (HPCC) Benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of five leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon Cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC Benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark results to study the performance of 11 MPI communication functions on these systems.
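As a rough illustration of the kind of measurement the IMB point-to-point tests perform, the sketch below times a simple MPI ping-pong between two ranks and derives one-way latency and bandwidth. This is not code from the paper or from IMB itself; the message size and repetition count are illustrative assumptions.

```c
/* Minimal ping-pong timing sketch between ranks 0 and 1 (assumed values). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int reps = 1000;          /* assumed repetition count */
    const int msg_bytes = 1 << 20;  /* assumed 1 MiB message size */
    char *buf = malloc(msg_bytes);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* Half the average round-trip time approximates one-way latency;
         * bandwidth is message size divided by that one-way time. */
        double one_way = (t1 - t0) / (2.0 * reps);
        printf("one-way time: %g s, bandwidth: %g MB/s\n",
               one_way, msg_bytes / one_way / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```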