The advances in multicore technology and modern interconnects are rapidly increasing the number of cores deployed in today's commodity clusters. A majority of parallel applications written in MPI employ collective operations in their communication kernels, and optimizing these operations on multicore platforms is key to obtaining good speed-ups. However, designing these operations for modern multicores is a non-trivial task. Processors such as Intel's Clovertown and AMD's Opteron differ in architectural attributes that have important ramifications for communication: Clovertown shares an L2 cache between each pair of cores, whereas on the Opteron each core has an exclusive L2 cache. Understanding the impact of these architectures on communication performance is crucial to designing efficient collective algorithms. In this paper, we systematically evaluate these architectures and use the resulting insights to develop efficient collective operations such as MPI_Bcast, MPI_Allgather, MPI_Allreduce and MPI_Alltoall. Further, we characterize the behavior of these collective algorithms on multicores, especially when network and intra-node communication occur concurrently. We also evaluate the benefits of the proposed intra-node MPI_Allreduce on Opteron multicores and compare it with Intel Clovertown systems. The optimizations proposed in this paper reduce the latency of MPI_Bcast and MPI_Allgather by 1.9 and 4.0 times, respectively, on 512 cores. For MPI_Allreduce, our optimizations improve performance by as much as 33% on the multicores. Further, we observe up to a threefold improvement in performance for a matrix multiplication benchmark on 512 cores.
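To make the hierarchical idea behind such shared-memory collectives concrete, the sketch below shows one common two-level structure for an allreduce: reduce within each node, allreduce across one leader per node, then broadcast the result inside each node. This is a minimal illustration only, not the paper's implementation; it assumes an MPI-3 library (for MPI_Comm_split_type) and a double-precision sum, whereas the original work relies on explicit shared-memory channels inside the MPI library.

/*
 * Hedged sketch of a two-level (intra-node + inter-node) allreduce.
 * Assumptions: MPI-3 (MPI_Comm_split_type with MPI_COMM_TYPE_SHARED),
 * summation over doubles.  Not the authors' implementation.
 */
#include <mpi.h>

static void hier_allreduce_sum(const double *sendbuf, double *recvbuf,
                               int count, MPI_Comm comm)
{
    MPI_Comm node_comm, leader_comm;
    int node_rank;

    /* Group the ranks that share a node. */
    MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL,
                        &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);

    /* One leader (node_rank == 0) per node joins the inter-node communicator;
       all other ranks receive MPI_COMM_NULL. */
    MPI_Comm_split(comm, node_rank == 0 ? 0 : MPI_UNDEFINED, 0, &leader_comm);

    /* Step 1: intra-node reduction to the node leader (shared-memory friendly). */
    MPI_Reduce(sendbuf, recvbuf, count, MPI_DOUBLE, MPI_SUM, 0, node_comm);

    /* Step 2: allreduce among the node leaders over the network. */
    if (node_rank == 0)
        MPI_Allreduce(MPI_IN_PLACE, recvbuf, count, MPI_DOUBLE, MPI_SUM,
                      leader_comm);

    /* Step 3: intra-node broadcast of the final result. */
    MPI_Bcast(recvbuf, count, MPI_DOUBLE, 0, node_comm);

    if (leader_comm != MPI_COMM_NULL)
        MPI_Comm_free(&leader_comm);
    MPI_Comm_free(&node_comm);
}

Keeping the intra-node steps as MPI_Reduce/MPI_Bcast lets the library route them through shared memory while only one process per node touches the network, which is the general effect the intra-node optimizations described in the abstract aim for.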