Scientific computing applications are quickly adapting to leverage the massive parallelism of GPUs in large-scale clusters. However, current hybrid programming models require application developers to explicitly manage the disjoint host and GPU memory spaces, which reduces both efficiency and productivity. Consequently, GPU-integrated MPI solutions, such as MPI-ACC and MVAPICH2-GPU, have been developed to provide unified programming interfaces and optimized implementations for end-to-end data communication among CPUs and GPUs. To date, however, there has been no in-depth characterization of the optimization space these systems open up, or of their productivity impact on scientific applications. In this paper, we study the efficacy of GPU-integrated MPI on scientific applications from domains such as epidemiology simulation and seismology modeling, and we discuss the lessons learned. We use MPI-ACC as an example implementation and demonstrate how the programmer can seamlessly choose either the CPU or the GPU as the logical communication endpoint, depending on the application's computational requirements. MPI-ACC also encourages programmers to explore novel application-specific optimizations, such as overlapping internode CPU-GPU communication with concurrent CPU and GPU computation, which can improve overall cluster utilization. Furthermore, MPI-ACC internally implements scalable memory-management techniques, thereby decoupling low-level memory optimizations from the applications and making them scalable and portable across architectures. Experimental results from a state-of-the-art cluster with hundreds of GPUs show that the new application-specific optimizations enabled by MPI-ACC improve the performance of an epidemiology simulation by up to 61.6% and that of a seismology modeling application by up to 44%, compared with traditional hybrid MPI+GPU implementations. We conclude that GPU-integrated MPI significantly enhances programmer productivity and has the potential to improve the performance and portability of scientific applications, a significant step toward making GPUs 'first-class citizens' of hybrid CPU-GPU clusters.
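To make the productivity argument concrete, the following is a minimal sketch contrasting the traditional hybrid MPI+GPU pattern (manual host staging around every MPI call) with the GPU-integrated style, in which a device pointer is handed directly to MPI. Passing device pointers to MPI_Send/MPI_Recv follows the convention of CUDA-aware implementations such as MVAPICH2-GPU; MPI-ACC's actual interface conveys buffer attributes through MPI mechanisms and differs in detail. The buffer names, the size N, and the TRADITIONAL macro are illustrative, not taken from the paper.

/* Sketch: traditional staging vs. GPU-integrated MPI point-to-point
 * exchange between ranks 0 and 1. Build with an MPI compiler wrapper
 * and link the CUDA runtime, e.g. mpicc sketch.c -lcudart
 * (build details vary by system). Requires at least two ranks and a
 * GPU-integrated (CUDA-aware) MPI library for the non-TRADITIONAL path. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

#define N (1 << 20)  /* illustrative message size: 1M doubles */

int main(int argc, char **argv)
{
    int rank;
    double *d_buf;  /* GPU-resident buffer */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaMalloc((void **)&d_buf, N * sizeof(double));
    cudaMemset(d_buf, 0, N * sizeof(double));  /* placeholder payload */

#ifdef TRADITIONAL
    /* Traditional hybrid model: the programmer allocates a host buffer
     * and stages data through it by hand on both sides of the transfer.
     * The device-to-host copy and the network send run serially. */
    double *h_buf = (double *)malloc(N * sizeof(double));
    if (rank == 0) {
        cudaMemcpy(h_buf, d_buf, N * sizeof(double), cudaMemcpyDeviceToHost);
        MPI_Send(h_buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(h_buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        cudaMemcpy(d_buf, h_buf, N * sizeof(double), cudaMemcpyHostToDevice);
    }
    free(h_buf);
#else
    /* GPU-integrated model: the device pointer is passed straight to MPI,
     * and the library performs (and can pipeline) the staging internally. */
    if (rank == 0)
        MPI_Send(d_buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
#endif

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}

Beyond the obvious reduction in boilerplate, the integrated path lets the library overlap the PCIe copy with the network transfer and apply its own memory-management optimizations, which is where the application-level speedups reported above come from; in the hand-staged version those stages are exposed to, and serialized by, the application.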