Performance Comparison of MPI Implementations over InfiniBand, Myrinet and Quadrics

  • Authors:
  • Jiuxing Liu;Balasubramanian Chandrasekaran;Jiesheng Wu;Weihang Jiang;Sushmitha Kini;Weikuan Yu;Darius Buntinas;Peter Wyckoff;D. K. Panda

  • Affiliations:
  • The Ohio State University, Columbus;The Ohio State University, Columbus;The Ohio State University, Columbus;The Ohio State University, Columbus;The Ohio State University, Columbus;The Ohio State University, Columbus;The Ohio State University, Columbus;Ohio Supercomputer Center, Columbus, OH;The Ohio State University, Columbus

  • Venue:
  • Proceedings of the 2003 ACM/IEEE conference on Supercomputing
  • Year:
  • 2003


Abstract

In this paper, we present a comprehensive performance comparison of MPI implementations over InfiniBand, Myrinet and Quadrics. Our performance evaluation consists of two major parts. The first part is a set of MPI-level micro-benchmarks that characterize different aspects of the MPI implementations. The second part consists of application-level benchmarks: the NAS Parallel Benchmarks and the sweep3D benchmark. We not only present the overall performance results, but also relate the applications' communication characteristics to the information acquired from the micro-benchmarks. Our results show that each of the three MPI implementations has its own advantages and disadvantages. For our 8-node cluster, InfiniBand offers significant performance improvements for a number of applications compared with Myrinet and Quadrics when using the PCI-X bus. Even with just the PCI bus, InfiniBand can still perform better if the applications are bandwidth-bound.