High Performance RDMA-Based MPI Implementation over InfiniBand

  • Authors:
  • Jiuxing Liu; Jiesheng Wu; Dhabaleswar K. Panda

  • Affiliations:
  • Computer and Information Science, The Ohio State University, Columbus, OH (all authors)

  • Venue:
  • International Journal of Parallel Programming - Special issue I: The 17th annual international conference on supercomputing (ICS'03)
  • Year:
  • 2004

Abstract

Although the InfiniBand Architecture is relatively new to the high performance computing area, it offers many features that help improve the performance of communication subsystems. One of these features is Remote Direct Memory Access (RDMA) operations. In this paper, we propose a new design of MPI over InfiniBand which brings the benefit of RDMA not only to large messages, but also to small and control messages. We also achieve better scalability by exploiting application communication patterns and combining send/receive operations with RDMA operations. Our RDMA-based MPI implementation achieves a latency of 6.8 µsec for small messages and a peak bandwidth of 871 million bytes/sec. Performance evaluation shows that for small messages, our RDMA-based design can reduce latency by 24%, increase bandwidth by over 104%, and reduce host overhead by up to 22%, compared with the original design. For large data transfers, we improve performance by reducing the time spent transferring control messages. We have also shown that our new design benefits MPI collective communication and the NAS Parallel Benchmarks.
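
To put the reported small-message latency figure in context, the sketch below shows the kind of MPI ping-pong microbenchmark commonly used to measure one-way latency. It is not the authors' benchmark code; the message size, warm-up count, and iteration count are illustrative assumptions.

```c
/* Minimal MPI ping-pong latency microbenchmark (illustrative sketch;
 * not the paper's benchmark). Rank 0 and rank 1 exchange a small
 * message repeatedly; one-way latency is half the average round trip. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define MSG_SIZE   4        /* small message size in bytes (assumed) */
#define WARMUP     100      /* untimed warm-up iterations (assumed)  */
#define ITERATIONS 10000    /* timed iterations (assumed)            */

int main(int argc, char **argv)
{
    int rank, size;
    char buf[MSG_SIZE];
    double start = 0.0, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 processes.\n");
        MPI_Finalize();
        return 1;
    }

    memset(buf, 0, sizeof(buf));

    for (int i = 0; i < WARMUP + ITERATIONS; i++) {
        if (i == WARMUP && rank == 0)
            start = MPI_Wtime();   /* start timing after warm-up */

        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        elapsed = MPI_Wtime() - start;
        /* one-way latency = average round-trip time / 2 */
        printf("Average one-way latency: %.2f usec\n",
               elapsed / ITERATIONS / 2.0 * 1e6);
    }

    MPI_Finalize();
    return 0;
}
```

Run with any MPI implementation, e.g. `mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong`; over an RDMA-capable interconnect such as InfiniBand, the reported one-way latency for small messages is what designs like the one in this paper aim to reduce.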