Noncollective communicator creation in MPI

  • Authors and affiliations:
  • James Dinan, Argonne National Laboratory, Argonne, Illinois
  • Sriram Krishnamoorthy, Pacific Northwest National Laboratory, Richland, Washington
  • Pavan Balaji, Argonne National Laboratory, Argonne, Illinois
  • Jeff R. Hammond, Argonne National Laboratory, Argonne, Illinois
  • Manojkumar Krishnan, Pacific Northwest National Laboratory, Richland, Washington
  • Vinod Tipparaju, Oak Ridge National Laboratory, Oak Ridge, Tennessee
  • Abhinav Vishnu, Pacific Northwest National Laboratory, Richland, Washington

  • Venue:
  • EuroMPI'11 Proceedings of the 18th European MPI Users' Group conference on Recent advances in the message passing interface
  • Year:
  • 2011


Abstract

MPI communicators abstract communication operations across application modules, facilitating seamless composition of different libraries. In addition, communicators provide the ability to form groups of processes and establish multiple levels of parallelism. Traditionally, communicators have been created collectively in the context of the parent communicator. The recent thrust toward systems at petascale and beyond has brought forth new application use cases, including fault tolerance and load balancing, that make the ability to construct an MPI communicator in the context of its new process group, rather than its parent, a key capability. However, it has long been believed that MPI does not allow the user to form a new communicator in this way. We present a new algorithm that allows the user to create such flexible process groups using only the functionality given in the current MPI standard. We explore the performance implications of this technique and demonstrate its utility for load balancing in the context of a Markov chain Monte Carlo computation. In comparison with a traditional collective approach, noncollective communicator creation enables a 30% improvement in execution time through asynchronous load balancing.