Improving communication in PGAS environments: static and dynamic coalescing in UPC

  • Authors:
  • Michail Alvanos; Montse Farreras; Ettore Tiotto; José Nelson Amaral; Xavier Martorell

  • Affiliations:
  • Barcelona Supercomputing Center, Barcelona, Spain; Universitat Politècnica de Catalunya, Barcelona, Spain; IBM Toronto Laboratory, Toronto, Canada; University of Alberta, Edmonton, Canada; Universitat Politècnica de Catalunya, Barcelona, Spain

  • Venue:
  • Proceedings of the 27th ACM International Conference on Supercomputing (ICS '13)
  • Year:
  • 2013

Abstract

The goal of Partitioned Global Address Space (PGAS) languages is to improve programmer productivity on large-scale parallel machines. However, PGAS programs may contain many fine-grained shared accesses that lead to performance degradation. Manual code transformations or compiler optimizations are required to improve the performance of programs with fine-grained accesses. The downside of manual code transformations is increased program complexity, which hinders programmer productivity. On the other hand, most compiler optimizations of fine-grained accesses require knowledge of the physical data mapping and the use of parallel loop constructs. This paper presents an optimization for the Unified Parallel C language that combines compile-time (static) and runtime (dynamic) coalescing of shared data, without knowledge of the physical data mapping. Larger messages increase network efficiency, and static coalescing decreases the overhead of library calls. The performance evaluation uses two microbenchmarks and three benchmarks to obtain scaling and absolute performance numbers on up to 32,768 cores of a Power 775 machine. Our results show that the compiler transformation yields speedups from 1.15x up to 21x over the baseline versions and achieves up to 63% of the performance of the MPI versions.
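
To illustrate the kind of access pattern the abstract targets, the following is a minimal UPC sketch (not the authors' implementation) contrasting fine-grained remote reads with a hand-coalesced bulk transfer using the standard `upc_memget` routine. The array name, block size, and helper functions are hypothetical; the paper's contribution is to perform this coalescing automatically, statically and at runtime, without knowing the physical data mapping.

```c
#include <upc.h>

#define BLK 1024
/* Block-distributed shared array: thread t owns elements [t*BLK, t*BLK + BLK). */
shared [BLK] double a[BLK * THREADS];

/* Fine-grained version: every access to a remote element may turn into
   a separate small message, which is the performance problem described above. */
double sum_remote_fine(int owner) {
    double s = 0.0;
    size_t base = (size_t)owner * BLK;
    for (size_t i = 0; i < BLK; i++)
        s += a[base + i];          /* potentially one message per element */
    return s;
}

/* Hand-coalesced version: one bulk transfer replaces many small ones,
   then the computation runs on the private copy. */
double sum_remote_coalesced(int owner) {
    double buf[BLK];               /* private buffer for the remote block */
    upc_memget(buf, &a[(size_t)owner * BLK], BLK * sizeof(double));
    double s = 0.0;
    for (size_t i = 0; i < BLK; i++)
        s += buf[i];
    return s;
}
```

The manual rewrite in `sum_remote_coalesced` is exactly the sort of transformation that hurts productivity when done by hand, which motivates doing it in the compiler and runtime instead.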