MPI Collectives on Modern Multicore Clusters: Performance Optimizations and Communication Characteristics

  • Authors:
  • Amith R. Mamidala; Rahul Kumar; Debraj De; D. K. Panda

  • Venue:
  • CCGRID '08: Proceedings of the 2008 Eighth IEEE International Symposium on Cluster Computing and the Grid
  • Year:
  • 2008

Abstract

Advances in multicore technology and modern interconnects are rapidly increasing the number of cores deployed in today's commodity clusters. A majority of parallel applications written in MPI employ collective operations in their communication kernels. Optimizing these operations on multicore platforms is key to obtaining good performance speed-ups. However, designing these operations for modern multicores is a non-trivial task. Modern multicores such as Intel's Clovertown and AMD's Opteron feature different architectural attributes with important ramifications for communication. For example, Clovertown shares an L2 cache between each pair of cores, whereas on Opteron each core has an exclusive L2 cache. Understanding the impact of these architectures on communication performance is crucial to designing efficient collective algorithms. In this paper, we systematically evaluate these architectures and use the resulting insights to develop efficient collective operations such as MPI_Bcast, MPI_Allgather, MPI_Allreduce and MPI_Alltoall. Further, we characterize the behavior of these collective algorithms on multicores, especially when network and intra-node communications occur concurrently. We also evaluate the benefits of the proposed intra-node MPI_Allreduce on Opteron multicores and compare it with Intel Clovertown systems. The optimizations proposed in this paper reduce the latency of MPI_Bcast and MPI_Allgather by 1.9 and 4.0 times, respectively, on 512 cores. For MPI_Allreduce, our optimizations improve performance by as much as 33% on the multicores. Further, we observe up to a three-fold performance improvement for a matrix multiplication benchmark on 512 cores.
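For readers unfamiliar with the collective interfaces named above, the sketch below is a minimal C/MPI program that calls two of them, MPI_Bcast and MPI_Allreduce, using the standard MPI API. It is an illustration only and is independent of the paper's optimized implementations; the program name, values, and process count are hypothetical.

/* collectives_demo.c: minimal use of MPI_Bcast and MPI_Allreduce.
 * Illustrative sketch only; not the paper's optimized implementation.
 * Build: mpicc collectives_demo.c -o collectives_demo
 * Run:   mpirun -np 8 ./collectives_demo
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* MPI_Bcast: the root (rank 0) distributes one integer to all ranks. */
    int value = (rank == 0) ? 42 : 0;
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* MPI_Allreduce: every rank contributes its rank id; all ranks
     * receive the global sum. */
    int local = rank, global_sum = 0;
    MPI_Allreduce(&local, &global_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d of %d: bcast value = %d, allreduce sum = %d\n",
           rank, size, value, global_sum);

    MPI_Finalize();
    return 0;
}

On a multicore cluster, how such calls map onto intra-node (shared-memory, cache-aware) versus inter-node (network) communication paths is exactly the design space the paper's optimizations target.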