Unifying UPC and MPI runtimes: experience with MVAPICH

  • Authors:
  • Jithin Jose; Miao Luo; Sayantan Sur; Dhabaleswar K. Panda

  • Affiliations:
  • The Ohio State University (all authors)

  • Venue:
  • Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model
  • Year:
  • 2010

Abstract

Unified Parallel C (UPC) is an emerging parallel programming language based on a shared-memory paradigm. MPI has been the dominant and most widely ported parallel programming model for the past couple of decades. Real-world scientific applications represent a substantial investment of effort by domain scientists, and many scientists choose the MPI programming model because it is considered low-risk. It is unlikely that entire applications will be rewritten in the emerging UPC language (or the PGAS paradigm) in the near future; it is more likely that parts of these applications will be converted to newer models. This requires that the underlying system software support both UPC and MPI simultaneously. Unfortunately, the current state of the art in UPC and MPI interoperability leaves much to be desired, both in terms of performance and ease of use. In this paper, we propose the "Integrated Native Communication Runtime" (INCR) for MPI and UPC communications on InfiniBand clusters. Our library is capable of supporting both UPC and MPI communications simultaneously. The runtime is based on the widely used MVAPICH (MPI over InfiniBand) Aptus runtime, which is known to scale to tens of thousands of cores. Our evaluation reveals that INCR delivers equal or better performance than the existing UPC runtime, GASNet, over InfiniBand verbs. With the UPC NAS benchmarks CG and MG (class B) at 128 processes, INCR outperforms the current GASNet implementation by 10% and 23%, respectively.
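
To illustrate the kind of hybrid application the abstract has in mind, the sketch below mixes UPC shared-array accesses with MPI collectives in one program. This is not code from the paper: the UPC constructs (shared arrays, MYTHREAD, THREADS, upc_barrier) and MPI calls are standard, but whether the two runtimes can safely coexist in one process depends on the underlying communication layer, which is precisely the interoperability problem a unified runtime such as the proposed INCR targets.

```c
/* Illustrative hybrid UPC + MPI sketch (not from the paper).
 * Assumes the UPC and MPI runtimes can be initialized in the same
 * process; a unified runtime (e.g., INCR) is meant to make this safe. */
#include <upc.h>
#include <mpi.h>
#include <stdio.h>

shared double partial[THREADS];   /* one slot per UPC thread in the global address space */

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);                 /* MPI side of the hybrid program */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* PGAS-style phase: each UPC thread writes its own slot of the shared array */
    partial[MYTHREAD] = (double)MYTHREAD;
    upc_barrier;

    /* MPI-style phase: reduce over the same set of processes */
    double local = partial[MYTHREAD], sum = 0.0;
    MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d threads = %f\n", THREADS, sum);

    MPI_Finalize();
    return 0;
}
```

In such a program, both programming models issue communication over the same InfiniBand network; with separate runtimes (GASNet for UPC, MVAPICH for MPI) each maintains its own connections and registered memory, whereas a single integrated runtime can share these resources.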