First Class Communication in MPI

  • Authors:
  • Erik Demaine

  • Venue:
  • MPIDC '96 Proceedings of the Second MPI Developers Conference
  • Year:
  • 1996


Abstract

In this paper we compare three concurrent-programming languages based on message passing: Concurrent ML (CML), Occam, and MPI. The main advantage of CML, an extension of Standard ML (SML), is that communication events are first-class values, just like ordinary program values (e.g., integers): they can be created at run-time, assigned to variables, and passed to and returned from functions. In addition, CML provides dynamic process and channel creation. Occam, originally designed for the Transputer, is based on a static model of process and channel creation. We examine how this static model imposes severe restrictions on communication events and limits the flexibility of Occam programs. The MPI (Message Passing Interface) standard provides a common way to access message passing from C and Fortran. Although MPI was designed for parallel and distributed computation, it can also be viewed as a general concurrent-programming language. In particular, most Occam features and several important facilities of CML can be implemented in MPI. For example, MPI-2 supports dynamic process and channel creation, as well as a less general form of first-class communication events. We propose an extension to MPI that provides the CML choose, wrap, and guard combinators. This would make MPI a strong base for the flexible concurrency available in CML. If these modifications are incorporated into the standard and its implementations, higher-order concurrency and its advantages will become more widespread.
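To make the abstract's central idea concrete, the following is a minimal Python sketch of what "first-class communication events" with choose, wrap, and guard combinators look like. All names here (`Event`, `Channel`, `wrap`, `guard`, `choose`) are our own illustrations, not an actual CML or MPI API: an event is an ordinary value describing a communication, and the communication itself happens only when the event is synchronized on.

```python
import queue
import time

# Sketch of CML-style first-class communication events (illustrative
# names only, not a real CML or MPI API). An Event pairs a readiness
# test with a deferred communication action.

class Event:
    def __init__(self, poll, commit):
        self._poll = poll      # () -> bool: could commit succeed now?
        self._commit = commit  # () -> value: perform the communication

    def sync(self):
        """Perform the deferred communication and return its result."""
        while not self._poll():
            time.sleep(0.001)  # busy-wait sketch; real CML blocks properly
        return self._commit()

class Channel:
    def __init__(self):
        self._q = queue.Queue()

    def send_evt(self, v):
        """Event that, when synced, sends v on this channel."""
        return Event(lambda: True, lambda: (self._q.put(v), v)[1])

    def recv_evt(self):
        """Event that, when synced, receives a value from this channel."""
        return Event(lambda: not self._q.empty(), lambda: self._q.get())

def wrap(evt, f):
    """CML's wrap: apply f to the event's result after synchronization."""
    return Event(evt._poll, lambda: f(evt._commit()))

def guard(mk):
    """CML's guard: defer building the event until synchronization time."""
    return Event(lambda: True, lambda: mk().sync())

def choose(evts):
    """CML's choose: synchronize on whichever event becomes ready first."""
    def commit():
        while True:
            for e in evts:
                if e._poll():
                    return e._commit()
            time.sleep(0.001)
    return Event(lambda: any(e._poll() for e in evts), commit)
```

Because events are ordinary values, they can be stored, passed to functions, and combined; `wrap(ch.recv_evt(), str.upper)` is itself an event that can in turn be handed to `choose`. In MPI terms, this resembles building on nonblocking requests (an `MPI_Irecv` followed by `MPI_Waitany` behaves like a one-shot `choose` over receives), which is why the abstract argues MPI could host these combinators.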