Effective communication and computation overlap with hybrid MPI/SMPSs

  • Authors:
  • Vladimir Marjanovic, Jesús Labarta, Eduard Ayguadé, Mateo Valero

  • Affiliations:
  • All authors: Barcelona Supercomputing Center (BSC-CNS), Technical University of Catalonia (UPC), Barcelona, Spain

  • Venue:
  • Proceedings of the 15th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
  • Year:
  • 2010

Abstract

Communication overhead is one of the dominant factors affecting performance in high-performance computing systems. To reduce its negative impact, programmers overlap communication and computation by using asynchronous communication primitives. This increases code complexity, requiring more development effort and making programs less readable. This paper presents the hybrid use of MPI and SMPSs (SMP superscalar, a task-based shared-memory programming model), which allows the programmer to easily introduce the asynchrony necessary to overlap communication and computation. We demonstrate the hybrid MPI/SMPSs approach with the High-Performance LINPACK benchmark (HPL) and compare it to the pure MPI implementation, which uses the look-ahead technique to overlap communication and computation. The hybrid MPI/SMPSs version significantly outperforms the pure MPI version, approaching the asymptotic performance at medium problem sizes and still delivering significant benefits at small and large problem sizes.