Leveraging MPI's one-sided communication interface for shared-memory programming

  • Authors:
  • Torsten Hoefler, James Dinan, Darius Buntinas, Pavan Balaji, Brian W. Barrett, Ron Brightwell, William Gropp, Vivek Kale, Rajeev Thakur

  • Affiliations:
  • University of Illinois at Urbana-Champaign, Urbana, IL, USA; Department of Computer Science, ETH Zurich, Switzerland; Argonne National Laboratory, Argonne, IL, USA; Sandia National Laboratories, Albuquerque, NM, USA

  • Venue:
  • EuroMPI '12: Proceedings of the 19th European Conference on Recent Advances in the Message Passing Interface
  • Year:
  • 2012

Abstract

Hybrid parallel programming, with MPI for internode communication in conjunction with a shared-memory programming model for intranode parallelism, has become a dominant approach to scalable parallel programming. While this model provides a great deal of flexibility and performance potential, it saddles programmers with the complexity of utilizing two parallel programming systems in the same application. We introduce an MPI-integrated shared-memory programming model that is incorporated into MPI through a small extension to the one-sided communication interface. We discuss the integration of this interface with the upcoming MPI 3.0 one-sided semantics and describe solutions for providing portable and efficient data sharing, atomic operations, and memory consistency. We describe an implementation of the new interface in MPICH2 and Open MPI and demonstrate an average performance improvement of 40% in the communication component of a five-point stencil solver.
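
The extension described in this paper was adopted into MPI 3.0 as shared-memory windows (MPI_Win_allocate_shared). The sketch below illustrates the typical usage pattern in C; it is not the paper's benchmark code, and the allocation size, fence-based synchronization, and reliance on the default contiguous allocation are illustrative choices.

    /* Minimal sketch of MPI 3.0 shared-memory windows: ranks on the
     * same node allocate a jointly addressable window, then access
     * each other's segments with plain loads and stores. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Split COMM_WORLD into communicators whose ranks can share memory. */
        MPI_Comm shm_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &shm_comm);

        int shm_rank, shm_size;
        MPI_Comm_rank(shm_comm, &shm_rank);
        MPI_Comm_size(shm_comm, &shm_size);

        /* Each rank contributes one double to the shared window. */
        double *my_base;
        MPI_Win win;
        MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                                MPI_INFO_NULL, shm_comm, &my_base, &win);
        *my_base = (double)shm_rank;

        /* Query rank 0's segment; with the default (contiguous)
         * allocation, all segments are addressable from its base. */
        MPI_Aint seg_size;
        int disp_unit;
        double *base0;
        MPI_Win_shared_query(win, 0, &seg_size, &disp_unit, &base0);

        /* Shared windows use the unified memory model; the fence makes
         * local stores visible to the other ranks on the node. */
        MPI_Win_fence(0, win);

        if (shm_rank == 0) {
            double sum = 0.0;
            for (int i = 0; i < shm_size; i++)
                sum += base0[i];  /* direct load from shared memory */
            printf("node sum = %.1f over %d ranks\n", sum, shm_size);
        }

        MPI_Win_free(&win);
        MPI_Comm_free(&shm_comm);
        MPI_Finalize();
        return 0;
    }

Passive-target synchronization (MPI_Win_lock_all with MPI_Win_sync and a barrier) is the other common pattern for such windows; the paper's memory-consistency discussion covers the semantics that make both usable.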