High performance MPI-2 one-sided communication over InfiniBand

  • Authors:
  • Weihang Jiang;Jiuxing Liu;Hyun-Wook Jin;D. K. Panda;W. Gropp;R. Thakur

  • Affiliations:
  • Comput. & Inf. Sci., Ohio State Univ., Columbus, OH, USA;Comput. & Inf. Sci., Ohio State Univ., Columbus, OH, USA;Comput. & Inf. Sci., Ohio State Univ., Columbus, OH, USA;Comput. & Inf. Sci., Ohio State Univ., Columbus, OH, USA;Dept. of Biomed. Informatics, Ohio State Univ., Columbus, OH, USA;Dept. of Comput. Sci., Indiana Univ., Bloomington, IN, USA

  • Venue:
  • CCGRID '04 Proceedings of the 2004 IEEE International Symposium on Cluster Computing and the Grid
  • Year:
  • 2004

Abstract

Many existing MPI-2 one-sided communication implementations are built on top of MPI send/receive operations. Although this approach can achieve good portability, it suffers from high communication overhead and depends on the remote process for communication progress. To address these problems, we propose a high performance MPI-2 one-sided communication design over the InfiniBand Architecture. In our design, MPI-2 one-sided communication operations such as MPI_Put, MPI_Get, and MPI_Accumulate are directly mapped to InfiniBand Remote Direct Memory Access (RDMA) operations. Our design has been implemented based on MPICH2 over InfiniBand. We present detailed design issues for this approach and perform a set of microbenchmarks to characterize different aspects of its performance. Our performance evaluation shows that, compared with the design based on MPI send/receive, our design can improve throughput by up to 77%, and reduce latency and synchronization overhead by up to 19% and 13%, respectively. Under certain process skew, the new design can reduce the negative performance impact from 41% to nearly 0%. It can also achieve better overlap of communication and computation.
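For context, the sketch below shows the MPI-2 one-sided API the abstract refers to: each process exposes a memory window and writes into its neighbor's window with MPI_Put under fence synchronization. This is a minimal illustrative example of the programming interface, not the paper's RDMA-based implementation or MPICH2 internals.

```c
/* Minimal sketch of MPI-2 one-sided communication: each rank exposes a
 * window and uses MPI_Put to write its rank into the right neighbor's
 * buffer, with MPI_Win_fence (active-target) synchronization. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = rank;   /* value this rank will put remotely   */
    int remote = -1;    /* window buffer written by a neighbor */
    MPI_Win win;

    /* Expose 'remote' as a window that other processes may access. */
    MPI_Win_create(&remote, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    int target = (rank + 1) % size;

    MPI_Win_fence(0, win);                 /* open access epoch      */
    MPI_Put(&local, 1, MPI_INT, target,
            0, 1, MPI_INT, win);           /* one-sided write        */
    MPI_Win_fence(0, win);                 /* complete put at target */

    printf("rank %d received %d from rank %d\n",
           rank, remote, (rank - 1 + size) % size);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

In an implementation layered on send/receive, completing such an MPI_Put requires the target process to make MPI progress; the design described in the abstract instead maps these operations onto InfiniBand RDMA, so the data transfer can complete without involving the remote CPU.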