Performance Modeling and Tuning Strategies of Mixed Mode Collective Communications
SC '05 Proceedings of the 2005 ACM/IEEE conference on Supercomputing
We describe a generic programming model for designing collective communications on SMP clusters. The model uses shared memory for intra-node collective communication and overlaps inter-node with intra-node communication, two techniques that are normally platform-specific. Several collective communications are designed based on this model and tested on three SMP clusters of different configurations. The results show that, with proper tuning, the developed collective communications provide significant performance improvements over existing generic implementations. For example, when broadcasting an 8 MB message, our implementations outperform the vendor's MPI_Bcast by 35% on an IBM SP system, 51% on a G4 cluster, and 63% on an Intel cluster, the latter two using MPICH's MPI_Bcast. For all-gather operations on 8 MB messages, our implementation outperforms the vendor's MPI_Allgather by 75% on the IBM SP, 60% on the Intel cluster, and 48% on the G4 cluster.
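The hierarchical idea behind such mixed-mode collectives can be illustrated with a small sketch: the root first sends the message across the network to one leader rank per SMP node, and each leader then fans it out to its local ranks, which on a real system would be a shared-memory copy rather than a network transfer. This is a simplified simulation under assumed node layouts, not the paper's actual implementation, and the function and parameter names (`hierarchical_bcast`, `nodes`) are illustrative.

```python
# Schematic sketch of a hierarchical (mixed-mode) broadcast on an SMP
# cluster: inter-node sends between node leaders, then intra-node copies
# standing in for shared-memory transfers. The rank-to-node layout and
# the choice of the first rank as node leader are assumptions for
# illustration only.

def hierarchical_bcast(message, nodes, root=0):
    """Broadcast `message` from `root` to every rank.

    `nodes` is a list of lists of ranks, one inner list per SMP node.
    Returns (received, inter_node_sends), where `received` maps each
    rank to the message and `inter_node_sends` counts cross-node
    messages -- the expensive part the hierarchy minimizes.
    """
    received = {root: message}
    inter_node_sends = 0

    # Phase 1 (inter-node): root sends to the leader of each other node.
    root_node = next(n for n in nodes if root in n)
    for node in nodes:
        leader = node[0]
        if node is not root_node and leader not in received:
            received[leader] = message  # crosses the network
            inter_node_sends += 1

    # Phase 2 (intra-node): each leader fans the message out inside its
    # node; on a real SMP this would be a shared-memory copy.
    for node in nodes:
        src = root if node is root_node else node[0]
        for rank in node:
            received.setdefault(rank, received[src])

    return received, inter_node_sends
```

For example, on 4 nodes of 4 ranks each, only 3 messages cross the network, versus 15 for a flat linear broadcast that treats all 16 ranks as equally distant; overlapping phase 1 and phase 2 is what the paper's model additionally exploits.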