Extending MPI to better support multi-application interaction

  • Authors:
  • Jay Lofstead; Jai Dayal

  • Affiliations:
  • Sandia National Laboratories; Georgia Institute of Technology

  • Venue:
  • EuroMPI'12: Proceedings of the 19th European Conference on Recent Advances in the Message Passing Interface
  • Year:
  • 2012

Abstract

Current scientific workflows generally consist of several components, either integrated in situ or run as completely independent, asynchronous components using centralized storage as an interface. Neither of these approaches is likely to scale well to Exascale. Instead, separate applications and services will be launched with online communication linking these components of the scientific discovery process. Our experiences with coupling multiple, independent MPI applications, each with separate processing phases, expose limitations that prevent use of some of the optimized mechanisms within the MPI standard. In this regard, we have identified two shortcomings in current MPI implementations. First, MPI intercommunicators offer a mechanism to communicate across application boundaries, but do not address the impact this operating mode has on the possible programming models for each separate application. Second, MPI_Probe offers a way to interleave local and remote messages, but MPI_Bcast and other collective calls cannot be matched by MPI_Probe, which prevents use of optimized collectives in this operating mode.
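
The sketch below (not taken from the paper) illustrates the MPI_Probe-style interleaving the abstract describes: a process polls both an intracommunicator (its own application) and an intercommunicator (the coupled application) for point-to-point messages, assuming the intercommunicator was established elsewhere, e.g. via MPI_Comm_connect/MPI_Comm_accept. The helper name and the integer payload are illustrative only; the closing comment marks the limitation the authors identify, that collectives such as MPI_Bcast cannot be detected this way.

```c
#include <mpi.h>
#include <stdio.h>

/* Hypothetical helper: check both communicators once and receive at most
 * one pending point-to-point message from each. */
static void poll_once(MPI_Comm local_comm, MPI_Comm inter_comm)
{
    MPI_Status status;
    int flag = 0;
    int payload;

    /* Non-blocking check for a message from a local peer. */
    MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, local_comm, &flag, &status);
    if (flag) {
        MPI_Recv(&payload, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
                 local_comm, MPI_STATUS_IGNORE);
        printf("local message %d from rank %d\n", payload, status.MPI_SOURCE);
    }

    /* Non-blocking check for a message from the coupled application. */
    MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, inter_comm, &flag, &status);
    if (flag) {
        MPI_Recv(&payload, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
                 inter_comm, MPI_STATUS_IGNORE);
        printf("remote message %d from remote rank %d\n", payload, status.MPI_SOURCE);
    }

    /* Limitation noted in the abstract: a broadcast issued with MPI_Bcast on
     * either communicator is never visible to MPI_Probe/MPI_Iprobe, so
     * collectives either must be replaced with explicit point-to-point sends
     * or coordinated out of band. */
}
```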