Infiniband Scalability in Open MPI

  • Authors:
  • Galen M. Shipman; Tim S. Woodall; Rich L. Graham; Arthur B. Maccabe; Patrick G. Bridges

  • Affiliations:
  • Galen M. Shipman: Los Alamos National Laboratory, Advanced Computing Laboratory, Los Alamos, NM and University of New Mexico, Dept. of Computer Science, Albuquerque, NM
  • Tim S. Woodall: Los Alamos National Laboratory, Advanced Computing Laboratory, Los Alamos, NM
  • Rich L. Graham: Los Alamos National Laboratory, Advanced Computing Laboratory, Los Alamos, NM
  • Arthur B. Maccabe: University of New Mexico, Dept. of Computer Science, Albuquerque, NM
  • Patrick G. Bridges: University of New Mexico, Dept. of Computer Science, Albuquerque, NM

  • Venue:
  • IPDPS '06: Proceedings of the 20th International Conference on Parallel and Distributed Processing
  • Year:
  • 2006

Abstract

Infiniband is becoming an important interconnect technology in high performance computing. Recent efforts in large-scale Infiniband deployments are raising scalability questions in the HPC community. Open MPI, a new open source implementation of the MPI standard targeted for production computing, provides several mechanisms to enhance Infiniband scalability. Initial comparisons with MVAPICH, the most widely used Infiniband MPI implementation, show similar performance but with much better scalability characteristics. Specifically, small message latency is improved by up to 10% in medium/large jobs, and memory usage per host is reduced by as much as a factor of three. In addition, Open MPI provides predictable latency that is close to optimal without sacrificing bandwidth performance.
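
The latency figures above refer to small-message point-to-point latency of the kind reported by a ping-pong microbenchmark. As a rough illustration only (this is not the benchmark used in the paper), the following minimal MPI ping-pong sketch in C shows how such a one-way latency number is typically obtained; the message size, iteration count, and output format are arbitrary choices.

    /*
     * Minimal ping-pong latency microbenchmark (illustrative sketch only;
     * not the benchmark from the paper). Rank 0 and rank 1 exchange a small
     * message repeatedly and rank 0 reports the average one-way latency.
     */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERATIONS 1000
    #define MSG_SIZE   8          /* small message, in bytes (arbitrary) */

    int main(int argc, char **argv)
    {
        int rank, size;
        char buf[MSG_SIZE] = {0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0)
                fprintf(stderr, "Run with at least 2 ranks.\n");
            MPI_Finalize();
            return 1;
        }

        /* Warm-up exchange so connection setup is not timed. */
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }

        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERATIONS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0) {
            /* Half of the round-trip time approximates the one-way latency. */
            double latency_us = (t1 - t0) / (2.0 * ITERATIONS) * 1e6;
            printf("Average one-way latency: %.2f us\n", latency_us);
        }

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched across two nodes (for example, mpirun -np 2 ./pingpong), such a benchmark reports half of the average round-trip time as the one-way latency. With Open MPI of this era, the Infiniband transport is typically selected through an MCA parameter such as --mca btl openib,self; the exact transport selection syntax depends on the Open MPI version.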