A comparative evaluation of static analysis actionable alert identification techniques
Proceedings of the 9th International Conference on Predictive Models in Software Engineering
Automated static analysis can identify potential source code anomalies early in the software development process that could lead to field failures. However, only a small portion of static analysis alerts may be important to the developer (actionable); the remainder are false positives (unactionable). We propose a process for building false positive mitigation models that classify static analysis alerts as actionable or unactionable using machine learning techniques. For two open source projects, we identify, out of 51 candidate characteristics, the sets of alert characteristics predictive of actionable and unactionable alerts. From these selected characteristics, we evaluate 15 machine learning algorithms, which build models to classify alerts. We obtain 88-97% average accuracy for both projects in classifying alerts using 3 to 14 alert characteristics. Additionally, the sets of selected alert characteristics and the best models differed between the two projects, suggesting that false positive mitigation models should be project-specific.
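A minimal sketch of the kind of pipeline the abstract describes: select a small subset of predictive alert characteristics, then compare several classifiers by cross-validated accuracy. The use of scikit-learn, the file name, the column names, and the particular classifiers are illustrative assumptions, not the authors' actual tooling or feature set.

```python
# Illustrative sketch (not the paper's implementation): classify static
# analysis alerts as actionable vs. unactionable using feature selection
# followed by cross-validated comparison of several candidate models.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: one row per alert, numeric-encoded candidate
# characteristics as columns, plus an "actionable" label (1 = actionable,
# 0 = unactionable) derived from whether the alert was eventually acted on.
alerts = pd.read_csv("alerts_with_characteristics.csv")  # hypothetical file
X = alerts.drop(columns=["actionable"])
y = alerts["actionable"]

# A few candidate learners; the paper evaluates 15, these are stand-ins.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

# Keep only the top-k characteristics by mutual information with the label,
# then score each model with 10-fold cross-validation.
for name, clf in candidates.items():
    pipeline = make_pipeline(
        SelectKBest(mutual_info_classif, k=10),
        StandardScaler(),
        clf,
    )
    scores = cross_val_score(pipeline, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

Running this kind of comparison separately per project would surface project-specific winners, consistent with the abstract's conclusion that the selected characteristics and best models differ between projects.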