High Performance RDMA-Based All-to-All Broadcast for InfiniBand Clusters

  • Authors:
  • S. Sur; U. K. R. Bondhugula; A. Mamidala; H.-W. Jin; D. K. Panda

  • Affiliations:
  • Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio (all authors)

  • Venue:
  • HiPC'05 Proceedings of the 12th international conference on High Performance Computing
  • Year:
  • 2005

Abstract

The all-to-all broadcast collective operation, called MPI_Allgather in the context of MPI, is essential for many parallel scientific applications. Contemporary MPI software stacks implement this collective on top of MPI point-to-point calls, which introduces several performance overheads. In this paper, we propose a design of all-to-all broadcast using the Remote Direct Memory Access (RDMA) feature offered by InfiniBand, an emerging high-performance interconnect. Our RDMA-based design eliminates the overheads associated with existing designs. Our results indicate that the latency of the all-to-all broadcast operation can be reduced by 30% for 32 processes at a message size of 32 KB. In addition, our design reduces latency by a factor of 4.75 under no-buffer-reuse conditions for the same process count and message size. Further, our design improves the performance of a parallel matrix multiplication algorithm by 37% on eight processes when multiplying 256×256 matrices.
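To make the baseline concrete, below is a minimal sketch in C with MPI of the kind of point-to-point-layered allgather the abstract refers to: a ring algorithm built on MPI_Sendrecv. This is an illustrative assumption about the conventional design criticized in the paper, not the paper's RDMA-based scheme; the function ring_allgather and its parameters are names introduced here for illustration.

```c
/* Illustrative baseline only: a ring-style allgather built on MPI
 * point-to-point calls, the kind of layered design the paper argues
 * incurs extra overheads. The paper's RDMA-based design is not shown. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Each rank contributes `count` ints in sendbuf; on return, recvbuf
 * holds nprocs * count ints, ordered by originating rank. */
static void ring_allgather(const int *sendbuf, int *recvbuf, int count,
                           MPI_Comm comm)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    /* Place our own block, then circulate blocks around the ring:
     * in step s we forward the block that originated at rank - s. */
    memcpy(recvbuf + rank * count, sendbuf, count * sizeof(int));

    int left  = (rank - 1 + nprocs) % nprocs;
    int right = (rank + 1) % nprocs;

    for (int s = 0; s < nprocs - 1; s++) {
        int send_block = (rank - s + nprocs) % nprocs;
        int recv_block = (rank - s - 1 + nprocs) % nprocs;
        MPI_Sendrecv(recvbuf + send_block * count, count, MPI_INT, right, 0,
                     recvbuf + recv_block * count, count, MPI_INT, left, 0,
                     comm, MPI_STATUS_IGNORE);
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    enum { COUNT = 4 };
    int send[COUNT];
    int *recv = malloc((size_t)nprocs * COUNT * sizeof(int));

    for (int i = 0; i < COUNT; i++)
        send[i] = rank * COUNT + i;   /* distinct data per rank */

    ring_allgather(send, recv, COUNT, MPI_COMM_WORLD);
    /* recv now matches what MPI_Allgather(send, COUNT, MPI_INT, ...)
     * would produce. */

    free(recv);
    MPI_Finalize();
    return 0;
}
```

Each of the nprocs − 1 steps passes one block around the ring, so every message traverses the full point-to-point stack (tag matching, intermediate copies, rendezvous handshakes for large messages); these per-message costs are the overheads an RDMA-based design can bypass by writing directly into remote receive buffers.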