Scaling abstraction refinement via pruning

  • Authors:
  • Percy Liang; Mayur Naik

  • Affiliations:
  • University of California at Berkeley, Berkeley, CA, USA; Intel Labs Berkeley, Berkeley, CA, USA

  • Venue:
  • Proceedings of the 32nd ACM SIGPLAN Conference on Programming Language Design and Implementation
  • Year:
  • 2011

Abstract

Many static analyses do not scale as they are made more precise. For example, increasing the amount of context sensitivity in a k-limited pointer analysis causes the number of contexts to grow exponentially with k. Iterative refinement techniques can mitigate this growth by starting with a coarse abstraction and only refining the parts of the abstraction that are deemed relevant with respect to a given client. In this paper, we introduce a new technique called pruning that uses client feedback in a different way. The basic idea is to use coarse abstractions to prune away parts of the program analysis deemed irrelevant for proving a client query, and then to use finer abstractions on the sliced program analysis. For a k-limited pointer analysis, this approach amounts to adaptively refining and pruning a set of prefix patterns representing the contexts relevant for the client. By pruning, we are able to scale up to much more expensive abstractions than before. We also prove that the pruned analysis is both sound and complete; that is, it yields the same results as an analysis that uses the more expensive abstraction directly, without pruning.
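The adaptive refine-and-prune loop described above can be sketched roughly as follows. This is a minimal illustration under assumptions, not the paper's implementation: the representation of contexts as call-site prefix tuples and the helper callbacks run_analysis and relevant_prefixes are hypothetical placeholders for the underlying analysis engine and the client feedback.

```python
# Illustrative sketch of adaptive refinement with pruning over context
# prefix patterns, in the spirit of the abstract. The callbacks
# run_analysis and relevant_prefixes are hypothetical placeholders,
# not the authors' actual system.

def refine_and_prune(query, call_sites, k_max, run_analysis, relevant_prefixes):
    # Start with the coarsest abstraction: a single empty prefix pattern,
    # i.e. full context insensitivity.
    patterns = {()}

    for k in range(1, k_max + 1):
        # Run the (cheap) analysis under the current set of prefix patterns.
        result = run_analysis(query, patterns)
        if result.proven:
            return True  # query already proven at this precision

        # Pruning: keep only the prefix patterns that client feedback marks
        # as relevant to the query; everything else is sliced away so the
        # next, more precise round does not pay for it.
        patterns = {p for p in patterns if p in relevant_prefixes(result)}
        if not patterns:
            return False  # nothing relevant remains; the query cannot be proven

        # Refinement: extend each surviving prefix by one more call site,
        # moving toward the full k-limited abstraction only where needed.
        patterns = {p + (site,) for p in patterns for site in call_sites}

    # Final attempt at the maximum precision k_max on the pruned pattern set.
    return run_analysis(query, patterns).proven
```

The key point the sketch tries to convey is that the expensive, deeper contexts are only ever constructed for prefixes that survived pruning at coarser levels, which is what allows the approach to scale to abstractions that would be infeasible to apply to the whole program directly.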