On the efficacy of GPU-integrated MPI for scientific applications

  • Authors and affiliations:
  • Ashwin M. Aji (Virginia Tech, Blacksburg, VA, USA); Lokendra S. Panwar (Virginia Tech, Blacksburg, VA, USA); Feng Ji (North Carolina State University, Raleigh, NC, USA); Milind Chabbi (Rice University, Houston, TX, USA); Karthik Murthy (Rice University, Houston, TX, USA); Pavan Balaji (Argonne National Laboratory, Chicago, IL, USA); Keith R. Bisset (Virginia Bioinformatics Institute, Blacksburg, VA, USA); James Dinan (Argonne National Laboratory, Chicago, IL, USA); Wu-chun Feng (Virginia Tech, Blacksburg, VA, USA); John Mellor-Crummey (Rice University, Houston, TX, USA); Xiaosong Ma (North Carolina State University, Raleigh, NC, USA); Rajeev Thakur (Argonne National Laboratory, Chicago, IL, USA)

  • Venue:
  • Proceedings of the 22nd International Symposium on High-Performance Parallel and Distributed Computing (HPDC '13)
  • Year:
  • 2013

Abstract

Scientific computing applications are quickly adapting to leverage the massive parallelism of GPUs in large-scale clusters. However, current hybrid programming models require application developers to explicitly manage the disjoint host and GPU memories, reducing both efficiency and productivity. Consequently, GPU-integrated MPI solutions, such as MPI-ACC and MVAPICH2-GPU, have been developed to provide unified programming interfaces and optimized implementations for end-to-end data communication among CPUs and GPUs. To date, however, there has been no in-depth characterization of the new performance optimization space or of the productivity impact of such GPU-integrated communication systems for scientific applications. In this paper, we study the efficacy of GPU-integrated MPI on scientific applications from domains such as epidemiology simulation and seismology modeling, and we discuss the lessons learned. We use MPI-ACC as an example implementation and demonstrate how the programmer can seamlessly choose either the CPU or the GPU as the logical communication endpoint, depending on the application's computational requirements. MPI-ACC also encourages programmers to explore novel application-specific optimizations, such as internode CPU-GPU communication overlapped with concurrent CPU-GPU computation, which can improve overall cluster utilization. Furthermore, MPI-ACC internally implements scalable memory management techniques, thereby decoupling the low-level memory optimizations from the applications and making them scalable and portable across several architectures. Experimental results from a state-of-the-art cluster with hundreds of GPUs show that the new application-specific optimizations driven by MPI-ACC can improve the performance of an epidemiology simulation by up to 61.6% and that of a seismology modeling application by up to 44%, compared with traditional hybrid MPI+GPU implementations. We conclude that GPU-integrated MPI significantly enhances programmer productivity and has the potential to improve the performance and portability of scientific applications, a significant step toward making GPUs 'first-class citizens' of hybrid CPU-GPU clusters.
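
To make the contrast concrete, the sketch below illustrates the programming model the abstract describes: a halo exchange issued directly on GPU buffers, overlapped with an interior computation. It is a minimal illustration under stated assumptions, not the paper's code. It assumes a GPU-integrated (CUDA-aware) MPI build that accepts device pointers directly, as MVAPICH2-GPU does via unified virtual addressing; MPI-ACC's actual interface instead marks GPU buffers through MPI attributes, so its call sites differ slightly. The kernel, buffer names, and sizes are all hypothetical.

    /* halo_overlap.cu -- a minimal sketch, not the paper's code.
     * Assumes a GPU-integrated (CUDA-aware) MPI that accepts device
     * pointers directly; MPI-ACC's real interface tags GPU buffers
     * via MPI attributes, so its call sites differ slightly. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    #define HALO     1024        /* halo cells exchanged per step   */
    #define INTERIOR (1 << 20)   /* interior cells computed locally */

    /* Illustrative interior update; stands in for the application's
     * epidemiology or seismology kernel. */
    __global__ void interior_kernel(double *u, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) u[i] *= 0.5;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double *d_send, *d_recv, *d_interior;
        cudaMalloc((void **)&d_send, HALO * sizeof(double));
        cudaMalloc((void **)&d_recv, HALO * sizeof(double));
        cudaMalloc((void **)&d_interior, INTERIOR * sizeof(double));
        cudaMemset(d_send, 0, HALO * sizeof(double));
        cudaMemset(d_interior, 0, INTERIOR * sizeof(double));

        /* GPU-integrated MPI: the device buffers themselves are the
         * logical communication endpoints. A traditional hybrid code
         * would instead stage each buffer through host memory with
         * explicit cudaMemcpy calls around every MPI operation. */
        int right = (rank + 1) % size, left = (rank - 1 + size) % size;
        MPI_Request reqs[2];
        MPI_Irecv(d_recv, HALO, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(d_send, HALO, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

        /* Overlap: the interior computation runs on the GPU while the
         * internode halo exchange is in flight, the kind of
         * application-specific optimization the abstract describes. */
        interior_kernel<<<(INTERIOR + 255) / 256, 256>>>(d_interior, INTERIOR);

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        cudaDeviceSynchronize();

        cudaFree(d_send); cudaFree(d_recv); cudaFree(d_interior);
        MPI_Finalize();
        return 0;
    }

In the traditional hybrid model, the host-staging copies also tend to serialize communication against computation unless the programmer hand-codes pipelining with CUDA streams and pinned host buffers; the abstract's point is that a GPU-integrated MPI performs that pipelining and buffer management once, inside the library, rather than in every application.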