Communication optimizations for parallel C programs

  • Authors:
  • Yingchun Zhu; Laurie J. Hendren

  • Affiliations:
  • School of Computer Science, McGill University, Montreal, Quebec, Canada H3A 2A7 (both authors)

  • Venue:
  • PLDI '98: Proceedings of the ACM SIGPLAN 1998 Conference on Programming Language Design and Implementation
  • Year:
  • 1998

Abstract

This paper presents algorithms for reducing the communication overhead for parallel C programs that use dynamically-allocated data structures. The framework consists of an analysis phase called possible-placement analysis, and a transformation phase called communication selection.

The fundamental idea of possible-placement analysis is to find all possible points for insertion of remote memory operations. Remote reads are propagated upwards, whereas remote writes are propagated downwards. Based on the results of the possible-placement analysis, the communication selection transformation selects the "best" place for inserting the communication, and determines if pipelining or blocking of communication should be performed.

The framework has been implemented in the EARTH-McCAT optimizing/parallelizing C compiler, and experimental results are presented for five pointer-intensive benchmarks running on the EARTH-MANNA distributed-memory parallel architecture. These experiments show that the communication optimization can provide performance improvements of up to 16% over the unoptimized benchmarks.
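To make the flavor of the transformation concrete, here is a minimal sketch (not taken from the paper). The split-phase primitives remote_read_start and remote_read_sync are hypothetical stand-ins for an EARTH-style runtime interface, stubbed with local copies so the code compiles and runs on an ordinary machine; node_t, sum_children_blocking, and sum_children_pipelined are likewise illustrative names.

```c
#include <stdio.h>

/* Hypothetical split-phase primitives standing in for an EARTH-style runtime
 * interface (illustrative names, not the paper's or Threaded-C's actual API).
 * remote_read_start() would post a non-blocking fetch of one int across the
 * network; remote_read_sync() would block until all outstanding fetches have
 * completed.  They are stubbed with local copies so the sketch runs anywhere. */
static void remote_read_start(const int *remote_addr, int *local_dst) {
    *local_dst = *remote_addr;
}
static void remote_read_sync(void) {
    /* no outstanding requests in the stubbed version */
}

typedef struct node { int value; struct node *left, *right; } node_t;

/* Unoptimized: each remote read blocks at its use site, so the two network
 * latencies are paid one after the other. */
static int sum_children_blocking(const node_t *n) {
    int l, r;
    remote_read_start(&n->left->value, &l);  remote_read_sync();
    remote_read_start(&n->right->value, &r); remote_read_sync();
    return l + r;
}

/* After the analysis has propagated both remote reads upward to a common
 * program point, communication selection can pipeline them: issue both
 * fetches, then wait once, overlapping the two remote latencies. */
static int sum_children_pipelined(const node_t *n) {
    int l, r;
    remote_read_start(&n->left->value, &l);
    remote_read_start(&n->right->value, &r);
    remote_read_sync();   /* single synchronization point for both values */
    return l + r;
}

int main(void) {
    node_t a = {3, NULL, NULL}, b = {4, NULL, NULL}, root = {0, &a, &b};
    printf("%d %d\n", sum_children_blocking(&root), sum_children_pipelined(&root));
    return 0;
}
```

The second function corresponds to the pipelining choice described in the abstract; when the reads cannot profitably be hoisted to a common point, the selection phase would instead leave them as blocking operations at their use sites, as in the first function.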