An Empirical Study of Iterative Data-Flow Analysis

  • Authors:
  • Keith D. Cooper; Timothy J. Harvey; Ken Kennedy

  • Affiliations:
  • Rice University, USA; Rice University, USA; Rice University, USA

  • Venue:
  • CIC '06: Proceedings of the 15th International Conference on Computing
  • Year:
  • 2006

Abstract

The iterative algorithm is widely used to solve instances of data-flow analysis problems. The algorithm is attractive because it is easy to implement and robust in its behavior. The theory behind the iterative algorithm establishes a set of conditions under which the algorithm runs in at most d(G)+3 passes over the graph — a round-robin algorithm, running a "rapid" framework, on a reducible graph [15]. Fortunately, these restrictions encompass many practical analyses used in code optimization. Even when the rapid conditions are not met, the iterative algorithm still terminates and produces correct results for a broad class of problems. Given the ubiquity of the iterative algorithm, it is important for compiler writers to understand the performance tradeoffs among different implementations of the algorithm. This paper examines a number of different data structures for speeding up the iterative algorithm with a worklist approach and presents carefully designed experiments on three different iteration-based analyses. Our experiments show not only that the worklist algorithm is significantly faster than the round-robin approach, but also that this advantage can be lost if the worklist implementation is done carelessly.
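
To make the worklist formulation concrete, below is a minimal sketch of a worklist-based iterative solver for liveness analysis, a standard backward data-flow problem. The dict-based CFG representation and the names `succ`, `use_`, and `def_` are illustrative assumptions, not taken from the paper; this is a sketch of the general technique, not the authors' implementation.

```python
# Worklist-based iterative data-flow solver, sketched for liveness
# analysis (a backward problem): changes at a block propagate to its
# predecessors until a fixed point is reached.
from collections import deque

def live_variables(blocks, succ, use_, def_):
    """Compute LIVE-OUT for each block by iterating to a fixed point.

    blocks: iterable of block ids
    succ:   block id -> list of successor ids
    use_:   block id -> set of names read before any write in the block
    def_:   block id -> set of names written in the block
    """
    pred = {b: [] for b in blocks}
    for b in blocks:
        for s in succ[b]:
            pred[s].append(b)

    live_out = {b: set() for b in blocks}
    # Seed the worklist with every block. Pairing the deque with a
    # membership set prevents duplicate entries -- exactly the kind of
    # implementation detail the paper shows can make or break the
    # worklist's advantage over the round-robin approach.
    work = deque(blocks)
    in_work = set(blocks)

    while work:
        b = work.popleft()
        in_work.discard(b)
        # LIVE-OUT(b) = union over successors s of
        #               use(s) | (LIVE-OUT(s) - def(s))
        new = set()
        for s in succ[b]:
            new |= use_[s] | (live_out[s] - def_[s])
        if new != live_out[b]:
            live_out[b] = new
            # The result at b changed, so every predecessor of b may
            # need to be recomputed.
            for p in pred[b]:
                if p not in in_work:
                    in_work.add(p)
                    work.append(p)
    return live_out

if __name__ == "__main__":
    # Tiny CFG: entry -> loop -> exit, with a back edge loop -> loop.
    blocks = ["entry", "loop", "exit"]
    succ = {"entry": ["loop"], "loop": ["loop", "exit"], "exit": []}
    use_ = {"entry": set(), "loop": {"i"}, "exit": {"x"}}
    def_ = {"entry": {"i"}, "loop": {"x"}, "exit": set()}
    print(live_variables(blocks, succ, use_, def_))
```

A round-robin solver would instead sweep all blocks in a fixed order until a full pass produces no change; the worklist variant revisits only blocks whose inputs actually changed, which is the source of the speedup the paper measures.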