Explaining failures of program analyses

  • Authors: Daniel von Dincklage, Amer Diwan

  • Affiliations: University of Colorado at Boulder, Boulder, CO, USA

  • Venue: Proceedings of the 2008 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI)
  • Year: 2008

Abstract

With programs getting larger and often more complex with each new release, programmers need all the help they can get in understanding and transforming programs. Fortunately, modern development environments, such as Eclipse, incorporate tools for understanding, navigating, and transforming programs. These tools typically use program analyses to extract relevant properties of programs, and they are often invaluable to developers; for example, many programmers use refactoring tools regularly. However, poor results from the underlying analyses can compromise a tool's usefulness. For example, if the underlying analysis is overly conservative, a bug-finding tool may overwhelm the user with false positives. In such cases it would be invaluable for the tool to explain to the user why it believes that each bug exists. Armed with this knowledge, the user can decide which bugs are worth pursuing and which are false positives. The contributions of this paper are as follows: (i) We describe requirements on the structure of an analysis so that we can produce reasons when the analysis fails; the user of the analysis determines whether or not an analysis's results constitute failure. We also describe a simple language that enforces these requirements; (ii) We describe how to produce necessary and sufficient reasons for analysis failure; (iii) We evaluate our system on a number of analyses and programs and find that most reasons are small (and thus usable) and that our system is fast enough for interactive use.
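
The idea of attaching reasons to analysis results can be illustrated with a small sketch. The following is a hypothetical, minimal example in Python, not the paper's system or language: a toy Datalog-style analysis that tags each derived fact with a set of base facts supporting it, so that when a client judges a result to be a failure (for example, a false positive), the attached set serves as an explanation. All names here (derive, assign, alias) are illustrative assumptions.

```python
# Minimal sketch: a fixpoint analysis that carries "reason" sets
# (supporting base facts) alongside every derived fact.
from itertools import product

def derive(base_facts, rules):
    """Compute the least fixpoint of `rules` over `base_facts`, mapping
    each derived fact to one (small) set of base facts supporting it."""
    support = {f: frozenset([f]) for f in base_facts}  # fact -> reason set
    changed = True
    while changed:
        changed = False
        for head_fn, body_preds in rules:
            # all known facts matching each body predicate
            candidates = [[f for f in support if f[0] == p] for p in body_preds]
            for combo in product(*candidates):
                head = head_fn(*combo)
                if head is None:
                    continue
                reason = frozenset().union(*(support[f] for f in combo))
                # keep the smallest reason set seen so far
                if head not in support or len(reason) < len(support[head]):
                    support[head] = reason
                    changed = True
    return support

# Toy "may alias" analysis over assignment facts.
base = {("assign", "a", "b"), ("assign", "b", "c")}

def alias_direct(f):
    _, x, y = f
    return ("alias", x, y)

def alias_trans(f, g):
    _, x, y = f
    _, y2, z = g
    return ("alias", x, z) if y == y2 else None

rules = [(alias_direct, ["assign"]),
         (alias_trans, ["alias", "alias"])]

facts = derive(base, rules)
# If the client considers alias(a, c) a failure (an unwanted result),
# the reason set names the assignments responsible for it.
print(facts.get(("alias", "a", "c")))
```

In this sketch the reason for an unwanted fact is the set of input facts from which it was derived; a user could inspect those inputs to decide whether the report is worth pursuing or is a false positive.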