It is difficult to determine the cost-effectiveness of program analysis tools because we cannot evaluate them in the same environment in which we will use them. Tool evaluations are usually run on mature, stable code that has already passed developer testing, whereas in practice program analysis tools are run on unstable code, and some are meant to run immediately after compilation. As a result, the evaluation results do not reflect the tool's true contribution, leaving program analysis tool evaluations highly subjective and largely dependent on the evaluator's intuition. While we cannot solve this problem entirely, we suggest techniques that make evaluations more objective. We begin with enforcement-based customization of the tool under evaluation. When evaluating a tool, we use a comparative evaluation technique to make the return-on-investment (ROI) analysis more objective. We also show how to use coverage models to select several tools when each finds different kinds of issues. Finally, we suggest that tool vendors include features that support continuous evaluation of a tool as it runs within our software process.
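The coverage-model idea above can be sketched as a greedy set-cover selection: given which issue categories each tool detects, repeatedly pick the tool that covers the most still-uncovered categories. The tool names and issue categories below are invented for illustration; the paper does not prescribe this particular algorithm, so treat it as one plausible realization.

```python
# Sketch of coverage-model tool selection via greedy set cover.
# All tool names and issue categories are hypothetical examples.

def select_tools(coverage, wanted):
    """Greedily pick tools until the wanted issue categories are covered.

    coverage: dict mapping tool name -> set of issue categories it finds
    wanted:   set of issue categories we need covered
    Returns (chosen tools in selection order, categories left uncovered).
    """
    remaining = set(wanted)
    chosen = []
    while remaining:
        # Pick the tool that covers the most still-uncovered categories.
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gained = coverage[best] & remaining
        if not gained:  # no tool covers anything that is left
            break
        chosen.append(best)
        remaining -= gained
    return chosen, remaining

# Example evaluation data (hypothetical):
coverage = {
    "ToolA": {"null-deref", "resource-leak"},
    "ToolB": {"concurrency", "null-deref"},
    "ToolC": {"sql-injection", "xss"},
}
wanted = {"null-deref", "resource-leak", "concurrency", "sql-injection", "xss"}
chosen, uncovered = select_tools(coverage, wanted)
```

Here all three tools are selected because each contributes categories the others miss; if one tool's findings were a subset of another's, it would be skipped, which is the point of evaluating tools as a complementary portfolio rather than one at a time.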