Toward Efficient Support for Multithreaded MPI Communication

  • Authors:
  • Pavan Balaji, Darius Buntinas, David Goodell, William Gropp, Rajeev Thakur

  • Affiliations:
  • Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439, USA (Balaji, Buntinas, Goodell, Thakur); Department of Computer Science, University of Illinois, Urbana, IL 61801, USA (Gropp)

  • Venue:
  • Proceedings of the 15th European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface
  • Year:
  • 2008


Abstract

To make the most effective use of parallel machines that are being built out of increasingly large multicore chips, researchers are exploring the use of programming models comprising a mixture of MPI and threads. Such hybrid models require efficient support from the MPI implementation for messages sent simultaneously from multiple threads. In this paper, we explore the issues involved in designing such an implementation. We present four approaches to building a fully thread-safe MPI implementation, with decreasing levels of critical-section granularity (from coarse-grain locks to fine-grain locks to lock-free operations) and correspondingly increasing levels of complexity. We describe how we have structured our implementation to support all four approaches and enable one to be selected at build time. We present performance results with a message-rate benchmark to demonstrate the performance implications of the different approaches.