MPI and Hybrid Programming Models for Petascale Computing

  • Authors: William D. Gropp
  • Affiliation: Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana
  • Venue: Proceedings of the 15th European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface
  • Year: 2008


Abstract

In 2011, the National Center for Supercomputing Applications at the University of Illinois will begin operation of the Blue Waters petascale computing system. This system, funded by the National Science Foundation, will deliver a sustained performance of one to two petaflops for many applications in science and engineering. Blue Waters will support a variety of programming models, including the "MPI everywhere" model that is the most common among today's MPI applications, as well as other models that may be used either instead of MPI or in combination with it. Such a combined programming model is often called a hybrid model. The most familiar of the models used in combination with MPI is OpenMP, which is designed for shared-memory systems and is based on the use of multiple threads within each MPI process. This programming model has had mixed success to date: many experiments show little benefit, while others show promise. The reason is related to how OpenMP is used within MPI programs. Where OpenMP is used to complement MPI, for example by providing better support for load-balancing adaptive computations or for sharing large data tables, it can provide a significant benefit. Where it is used as an alternative to MPI, OpenMP often has difficulty achieving the performance of MPI (MPI's much-criticized requirement that the user directly manage data motion ensures that the programmer does in fact manage that data motion, leading to improved performance). This suggests that other programming models can be productively combined with MPI as long as they complement, rather than replace, MPI.
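
The following is a minimal sketch, not taken from the paper, of the MPI+OpenMP hybrid model the abstract describes: each MPI process uses OpenMP threads for node-local work while MPI handles communication between processes. The program, the funneled threading level, and the example reduction are illustrative assumptions only.

/* Hypothetical hybrid MPI+OpenMP example (not from the paper).
 * Build with an MPI compiler wrapper and OpenMP enabled, e.g.:
 *   mpicc -fopenmp hybrid.c -o hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size;

    /* Request MPI_THREAD_FUNNELED: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1000000;          /* illustrative problem size */
    double local_sum = 0.0;

    /* OpenMP threads share this process's portion of the work. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = rank; i < n; i += size)
        local_sum += 1.0 / (i + 1.0);

    /* MPI combines the per-process partial results across the machine. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f (%d MPI ranks, up to %d OpenMP threads each)\n",
               global_sum, size, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

Here OpenMP complements MPI rather than replacing it, in the sense the abstract argues for: the threads exploit shared memory within a node, and MPI remains responsible for data motion between processes.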