With programs getting larger and often more complex with each new release, programmers need all the help they can get in understanding and transforming programs. Fortunately, modern development environments, such as Eclipse, incorporate tools for understanding, navigating, and transforming programs. These tools typically use program analyses to extract relevant properties of programs, and they are often invaluable to developers; for example, many programmers use refactoring tools regularly. However, poor results from the underlying analyses can compromise a tool's usefulness. For example, if the underlying analysis is overly conservative, a bug-finding tool may produce so many false positives that it overwhelms the user with spurious errors. In such cases it would be extremely helpful for the tool to explain to the user why it believes each bug exists. Armed with this knowledge, the user can decide which bugs are worth pursuing and which are false positives. The contributions of this paper are as follows: (i) we describe requirements on the structure of an analysis so that we can produce reasons when the analysis fails; the user of the analysis determines whether or not an analysis's results constitute failure. We also describe a simple language that enforces these requirements; (ii) we describe how to produce necessary and sufficient reasons for analysis failure; (iii) we evaluate our system on a number of analyses and programs and find that most reasons are small (and thus usable) and that our system is fast enough for interactive use.
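To make the notion of a "reason" concrete, the following is a minimal, hypothetical sketch, not the system described in the paper: a toy "definitely non-null" analysis over straight-line code, plus a routine that shrinks the set of statements blamed for a failed query until the set is sufficient (the query still fails when only those statements are treated as imprecise) and every statement in it is necessary (dropping any one lets the query succeed). All class, method, and enum names are invented for illustration.

import java.util.*;

// Hypothetical sketch: reasons for the failure of a toy non-null analysis.
public class FailureReason {
    // Abstract effect of a statement on the queried variable.
    enum Effect { PROVES_NONNULL, MAY_NULL, IRRELEVANT }

    // Toy forward analysis: can we prove the variable non-null at the end,
    // pretending the statements in 'ignored' have no imprecise effect?
    static boolean queryFails(List<Effect> stmts, Set<Integer> ignored) {
        boolean provedNonNull = false;
        for (int i = 0; i < stmts.size(); i++) {
            Effect e = ignored.contains(i) ? Effect.IRRELEVANT : stmts.get(i);
            if (e == Effect.PROVES_NONNULL) provedNonNull = true;
            else if (e == Effect.MAY_NULL)  provedNonNull = false;
        }
        return !provedNonNull;   // failure = the property could not be proved
    }

    // Greedy shrinking: start by blaming every imprecise statement, then drop
    // any statement that is not needed for the query to keep failing.
    static Set<Integer> reason(List<Effect> stmts) {
        Set<Integer> blamed = new LinkedHashSet<>();
        for (int i = 0; i < stmts.size(); i++)
            if (stmts.get(i) == Effect.MAY_NULL) blamed.add(i);

        for (Integer s : new ArrayList<>(blamed)) {
            Set<Integer> rest = new HashSet<>(blamed);
            rest.remove(s);
            // Ignore every imprecise statement except those still blamed in 'rest'.
            Set<Integer> ignored = new HashSet<>();
            for (int i = 0; i < stmts.size(); i++)
                if (stmts.get(i) == Effect.MAY_NULL && !rest.contains(i)) ignored.add(i);
            if (queryFails(stmts, ignored)) blamed = rest;   // s was not necessary
        }
        return blamed;
    }

    public static void main(String[] args) {
        // x = new Object(); x = map.get(k); y = 0;  -> only the second statement is blamed.
        List<Effect> stmts = List.of(Effect.PROVES_NONNULL, Effect.MAY_NULL, Effect.IRRELEVANT);
        System.out.println("reason (statement indices): " + reason(stmts));  // prints [1]
    }
}

A real analysis would track many facts over a control-flow graph rather than a single flag over a statement list, but the shrink-and-retest loop above illustrates how a set of blamed statements can be reduced until it is both necessary and sufficient for the failure of a given query.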