Optimizing Collective Communications on SMP Clusters

  • Authors:
  • Kyle Wright

  • Affiliations:
  • Iowa State University

  • Venue:
  • ICPP '05 Proceedings of the 2005 International Conference on Parallel Processing
  • Year:
  • 2005

Abstract

We describe a generic programming model for designing collective communications on SMP clusters. The programming model uses shared memory for collective communications and overlaps inter-node and intra-node communications, both of which are normally platform-specific techniques. Several collective communications are designed based on this model and tested on three SMP clusters of different configurations. The results show that the developed collective communications can, with proper tuning, provide significant performance improvements over existing generic implementations. For example, when broadcasting an 8MB message our implementations outperform the vendor's MPI_Bcast by 35% on an IBM SP system, 51% on a G4 cluster, and 63% on an Intel cluster, the latter two using MPICH's MPI_Bcast. With all-gather operations using 8MB messages, our implementation outperforms the vendor's MPI_Allgather by 75% on the IBM SP, 60% on the Intel cluster, and 48% on the G4 cluster.
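The hierarchical idea behind such SMP-aware collectives can be illustrated with a minimal sketch of a two-level broadcast: the root first sends the message to one "leader" rank per node (inter-node communication), and each leader then distributes it to the other ranks on its node (modeling the intra-node shared-memory copy). The function name, data layout, and node-partitioning scheme below are illustrative assumptions, not the paper's actual implementation, which is built on MPI and platform shared-memory facilities.

```python
# Illustrative sketch of a two-level (hierarchical) broadcast on an SMP
# cluster. Ranks are assumed to be laid out node-by-node, with the first
# rank on each node acting as that node's leader. This models the data
# movement pattern only; a real implementation would use MPI point-to-point
# sends for phase 1 and shared-memory segments for phase 2, and would
# pipeline the two phases to overlap inter-node and intra-node traffic.

def two_level_bcast(buffers, root, ranks_per_node):
    """buffers: dict mapping rank -> message; only `root` holds it initially.
    Assumes `root` is the leader (first rank) of its node."""
    nranks = len(buffers)
    msg = buffers[root]
    leaders = range(0, nranks, ranks_per_node)  # first rank on each node
    # Phase 1: inter-node broadcast from root to every node leader.
    for leader in leaders:
        buffers[leader] = msg
    # Phase 2: intra-node distribution from each leader to local ranks
    # (in a real system, a copy through a shared-memory buffer).
    for leader in leaders:
        for r in range(leader + 1, leader + ranks_per_node):
            buffers[r] = buffers[leader]
    return buffers

# Usage: 8 ranks on two 4-way SMP nodes; rank 0 broadcasts a payload.
bufs = {r: None for r in range(8)}
bufs[0] = b"payload"
two_level_bcast(bufs, root=0, ranks_per_node=4)
```

The point of the two-level structure is that only one rank per node touches the interconnect, while the remaining copies go through much faster intra-node shared memory; overlapping the two phases for long messages is what yields the large-message gains reported in the abstract.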