Open issues in MPI implementation

  • Authors:
  • Rajeev Thakur; William Gropp

  • Affiliations:
  • Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL (both authors)

  • Venue:
  • ACSAC'07: Proceedings of the 12th Asia-Pacific Conference on Advances in Computer Systems Architecture
  • Year:
  • 2007

Abstract

MPI (the Message Passing Interface) continues to be the dominant programming model for parallel machines of all sizes, from small Linux clusters to the largest parallel supercomputers such as IBM Blue Gene/L and Cray XT3. Although the MPI standard was released more than 10 years ago and a number of implementations of MPI are available from both vendors and research groups, MPI implementations still need improvement in many areas. In this paper, we discuss several such areas, including performance, scalability, fault tolerance, support for debugging and verification, topology awareness, collective communication, derived datatypes, and parallel I/O. We also present results from experiments with several MPI implementations (MPICH2, Open MPI, Sun, IBM) on a number of platforms (Linux clusters, Sun and IBM SMPs) that demonstrate the need for performance improvement in one-sided communication and support for multithreaded programs.
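
For readers unfamiliar with the two features the experiments measure, the following sketch (illustrative only, not code from the paper) shows the standard MPI-2 calls they involve: MPI_Init_thread requesting the MPI_THREAD_MULTIPLE level needed by multithreaded programs, and MPI_Put inside an MPI_Win_fence epoch for one-sided communication.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int provided, rank, nprocs, target;
        int local = 0;
        MPI_Win win;

        /* Request full thread support; an implementation may return a
           lower level than MPI_THREAD_MULTIPLE. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            printf("MPI_THREAD_MULTIPLE not available; got level %d\n", provided);

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Expose one integer per process as a window for one-sided access. */
        MPI_Win_create(&local, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* Each process writes its rank directly into the window of the next
           process; the target makes no matching receive call. */
        MPI_Win_fence(0, win);
        target = (rank + 1) % nprocs;
        MPI_Put(&rank, 1, MPI_INT, target, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);

        printf("rank %d received %d via MPI_Put\n", rank, local);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

The sketch can be built and launched with any MPI implementation's standard wrappers (e.g., mpicc and mpiexec); the paper's experiments compare how efficiently different implementations execute these kinds of operations.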