Polynomial time array dataflow analysis

  • Authors:
  • Robert Seater; David Wonnacott

  • Affiliations:
  • Haverford College, Haverford, PA; Haverford College, Haverford, PA

  • Venue:
  • LCPC '01: Proceedings of the 14th International Conference on Languages and Compilers for Parallel Computing
  • Year:
  • 2001

Abstract

Array dataflow analysis is a valuable tool for supercomputer compilers. However, the worst-case asymptotic time complexities of modern array dataflow analysis techniques are either not well understood or alarmingly high. For example, the Omega Test uses a subset of the 2^2^2^O(n) (triple-exponential) language of Presburger Arithmetic for the analysis of affine dependences; its use of uninterpreted function symbols for nonaffine terms introduces additional sources of complexity. Even traditional data dependence analysis of affine dependences is equivalent to integer programming, and is thus NP-complete. These worst-case complexities have raised questions about the wisdom of using array dataflow analysis in a production compiler, despite empirical data showing that these tests run quickly in practice. In this paper, we demonstrate that a polynomial-time algorithm can produce accurate information about the presence of loop-carried array dataflow. We first identify a subdomain of Presburger Arithmetic that can be manipulated (by the Omega Library) in polynomial time; we then describe a modification that prevents exponential blowup in the Omega Library's algorithm for manipulating function symbols. Restricting the Omega Test to these polynomial cases can, in principle, reduce the accuracy of the dataflow information produced. We therefore present the results of our investigation of the effects of these restrictions on the detection of loop-carried array dataflow dependences (which prevent parallelization). These restrictions block parallelization of only a few unimportant loop nests in the approximately 18,000 lines of benchmark code we studied. The use of our subdomain of Presburger Arithmetic also gives a modest reduction in analysis time, even with our current unoptimized implementation, as long as we do not employ our modified algorithms for function symbols. The data collected in our empirical studies also suggest directions for improving both accuracy and efficiency.
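To make the central notion concrete: the sketch below (not taken from the paper; the loops and the dependence relation written in the comments are illustrative assumptions in roughly Omega-calculator-style notation) contrasts a loop with loop-carried array dataflow, whose iteration-to-iteration flow of values can be captured by an affine Presburger relation of the kind the Omega Test manipulates, with a loop whose dependence relation between distinct iterations is empty and which is therefore parallelizable.

    #include <stdio.h>

    #define N 8

    int main(void) {
        double a[N + 1];
        for (int i = 0; i <= N; i++)
            a[i] = (double)i;

        /* Loop-carried array dataflow: iteration i reads a[i - 1],
         * which was written by iteration i - 1.  An array dataflow
         * analyzer would describe this flow of values with an affine
         * dependence relation, roughly the Presburger formula
         *     { [iw] -> [ir] : ir = iw + 1 && 1 <= iw && ir <= N }
         * Because the relation is nonempty, the loop cannot simply be
         * run in parallel as written. */
        for (int i = 1; i <= N; i++)
            a[i] = a[i - 1] + 1.0;

        /* No loop-carried dataflow: each iteration reads and writes
         * only its own element, so the dependence relation between
         * distinct iterations is empty and the iterations may run
         * in parallel. */
        for (int i = 1; i <= N; i++)
            a[i] = a[i] * 2.0;

        printf("a[N] = %f\n", a[N]);
        return 0;
    }

Deciding whether such a relation is empty for affine subscripts and bounds is an integer programming problem, which is the NP-complete core mentioned above; the paper's contribution is identifying restrictions under which the Omega Library answers these questions in polynomial time with little practical loss of precision.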