Compiler optimization techniques for OpenMP programs

  • Authors:
  • Shigehisa Satoh;Kazuhiro Kusano;Mitsuhisa Sato

  • Affiliations:
  • Tsukuba Research Center, Real World Computing Partnership, Japan (sh-sato@trc.rwcp.or.jp)
  • Systems Development Laboratory, Hitachi, Ltd., 3-16-8-402 Fujimi-Cho, Chofu-shi, Tokyo 182-0033, Japan
  • 1st Computers Software Division, NEC Solutions, NEC Corporation, 1-10 Nissin-cho, Fuchu, Tokyo 183-8501, Japan
  • Center for Computational Physics, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8577, Japan

  • Venue:
  • Scientific Programming
  • Year:
  • 2001


Abstract

We have developed compiler optimization techniques for explicit parallel programs using the OpenMP API. To enable optimization across threads, we designed dataflow analysis techniques in which interactions between threads are effectively modeled. Structured description of parallelism and relaxed memory consistency in OpenMP make the analyses effective and efficient. We developed algorithms for reaching definitions analysis, memory synchronization analysis, and cross-loop data dependence analysis for parallel loops. Our primary target is compiler-directed software distributed shared memory systems in which aggressive compiler optimizations for software-implemented coherence schemes are crucial to obtaining good performance. We also developed optimizations applicable to general OpenMP implementations, namely redundant barrier removal and privatization of dynamically allocated objects. Experimental results for the coherency optimization show that aggressive compiler optimizations are quite effective for a shared-write intensive program because the coherence-induced communication volume in such a program is much larger than that in shared-read intensive programs.