Performance evaluation of supercomputers using HPCC and IMB benchmarks

  • Authors and affiliations:
  • Subhash Saini, NASA Ames Research Center, Moffett Field, California
  • Robert Ciotti, NASA Ames Research Center, Moffett Field, California
  • Brian T. N. Gunney, Lawrence Livermore National Laboratory, Livermore, California
  • Thomas E. Spelce, Lawrence Livermore National Laboratory, Livermore, California
  • Alice Koniges, Lawrence Livermore National Laboratory, Livermore, California
  • Don Dossa, Lawrence Livermore National Laboratory, Livermore, California
  • Panagiotis Adamidis, High-Performance Computing Center Stuttgart, Germany, and University of Stuttgart
  • Rolf Rabenseifner, High-Performance Computing Center Stuttgart, Germany, and University of Stuttgart
  • Sunil R. Tiyyagura, High-Performance Computing Center Stuttgart, Germany, and University of Stuttgart
  • Matthias Mueller, ZIH, Dresden, Germany
  • Rod Fatoohi, San Jose State University, San Jose, California

  • Venue:
  • IPDPS'06: Proceedings of the 20th International Conference on Parallel and Distributed Processing
  • Year:
  • 2006

Abstract

The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmarks (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of five leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, IMB results are presented to study the performance of 11 MPI communication functions on these systems.