Lock-Free Asynchronous Rendezvous Design for MPI Point-to-Point Communication

  • Authors:
  • Rahul Kumar; Amith R. Mamidala; Matthew J. Koop; Gopal Santhanaraman; Dhabaleswar K. Panda

  • Affiliations:
  • Network-Based Computing Laboratory, Department of Computer Science and Engineering, The Ohio State University (all authors)

  • Venue:
  • Proceedings of the 15th European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface
  • Year:
  • 2008


Abstract

Message Passing Interface (MPI) is the most commonly used method for programming distributed-memory systems. Most MPI implementations use a rendezvous protocol for transmitting large messages. One of the features desired in an MPI implementation is the ability to asynchronously progress the rendezvous protocol. This is important because it gives applications the potential for good overlap of computation and communication. Several designs have been proposed in previous work to provide asynchronous progress. These designs typically use progress helper threads, with support from the network hardware, to make progress on the communication. However, most of these designs use locking to protect the shared data structures in the critical communication path. In addition, multiple interrupts may be necessary to make progress, and there is no mechanism to selectively ignore the events generated during communication. In this paper, we propose an enhanced asynchronous rendezvous protocol which overcomes these limitations. Specifically, our design does not require locks in the communication path. In our approach, the main application thread makes progress on the rendezvous transfer with the help of an additional thread. The communication between the two threads occurs via system signals. The new design can achieve near-total overlap of communication with computation. Further, our design does not degrade the performance of non-overlapped communication. We have also experimented with different thread scheduling policies of the Linux kernel and found that the round-robin policy provides the best performance. With the new design we have been able to achieve a 20% reduction in time for a matrix multiplication kernel with the MPI+OpenMP paradigm on 256 cores.
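
The signal-based coordination between the application thread and the helper thread described in the abstract can be illustrated with a small POSIX threads sketch. This is not the authors' implementation: the network event is simulated with a sleep() instead of an InfiniBand event/completion queue, the names rndv_event_pending, helper_thread, and rndv_signal_handler are hypothetical, and the handler merely sets an async-signal-safe flag that the main thread checks between compute chunks, so no locks appear in the critical path.

```c
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Flag set by the signal handler; sig_atomic_t keeps the access
 * async-signal-safe and lock-free, in the spirit of avoiding locks
 * in the critical communication path. */
static volatile sig_atomic_t rndv_event_pending = 0;

/* Handler runs in the main (application) thread when the helper
 * signals it; it only records that a rendezvous event is pending. */
static void rndv_signal_handler(int signo)
{
    (void)signo;
    rndv_event_pending = 1;
}

/* Hypothetical helper thread: a real design would block on the NIC's
 * event queue waiting for a rendezvous control message; here the
 * arrival is simulated with a sleep. */
static void *helper_thread(void *arg)
{
    pthread_t main_thread = *(pthread_t *)arg;
    sleep(1);                              /* pretend an RTS just arrived  */
    pthread_kill(main_thread, SIGUSR1);    /* wake the main thread         */
    return NULL;
}

int main(void)
{
    pthread_t self = pthread_self();
    pthread_t helper;

    struct sigaction sa;
    sa.sa_handler = rndv_signal_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGUSR1, &sa, NULL);

    pthread_create(&helper, NULL, helper_thread, &self);

    /* Application "computation" loop: between compute chunks the main
     * thread checks the flag and, if set, progresses the rendezvous
     * transfer itself (e.g., posts the large-message RDMA operation). */
    for (int i = 0; i < 5; i++) {
        sleep(1);                          /* placeholder for computation  */
        if (rndv_event_pending) {
            rndv_event_pending = 0;
            printf("main thread: progressing rendezvous transfer\n");
        }
    }

    pthread_join(helper, NULL);
    return 0;
}
```

Compile with `cc -pthread`. In an actual MPI library the helper thread would wait on the network hardware's interrupt/event channel, and the main application thread, not the helper, would perform the rendezvous data transfer when it observes the flag, which is what allows progress without locking shared communication state.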