Modern multi-core platforms are evolving rapidly, with 32 or 64 cores per node. Sharing system resources can increase communication efficiency between processes on the same node, but it also increases contention for those resources. Most current MPI libraries are developed for systems with a relatively small number of cores per node. On emerging multi-core systems with hundreds of cores per node, the existing shared-memory mechanisms in MPI run-times will suffer from scalability problems, which may limit the benefits gained from multi-core hardware. In this paper, we first analyze these problems and then propose a set of new schemes for small- and large-message transfer over shared memory. The "Shared Tail Cyclic Buffer" scheme reduces the number of read and write operations on shared control structures. The "State-Driven Polling" scheme optimizes message polling by dynamically adjusting the polling frequency across different communication pairs. The "On-Demand Global Shared Memory Pool" dynamically distributes runtime pinned-down memory, bringing the benefits of pair-wise buffers to large-message transfer and improving shared send-buffer utilization without increasing total shared-memory usage. In micro-benchmark evaluation, the new schemes improve point-to-point latency and bandwidth by up to 26% and 70%, respectively. At the application level, they achieve an 18% improvement for the Graph500 benchmark on a 64-core-per-node Bulldozer system and up to 11% improvement for the NAS benchmarks. In a 512-process evaluation on the 32-core-per-node Trestles system, the new schemes achieve a 16% improvement for the NAS CG benchmark.