Static analysis tools are useful for finding common programming mistakes that often lead to field failures. However, static analysis tools regularly generate a large number of false positive alerts, each of which a developer must inspect manually to determine whether it indicates a fault. The adaptive ranking model presented in this paper uses developer feedback about already-inspected alerts to rank the remaining alerts by the likelihood that each indicates a fault. Alerts are ranked based on the homogeneity of populations of generated alerts; historical developer feedback in the form of suppressing false positive alerts and fixing true positive alerts; and historical, application-specific data about the alert ranking factors. The ordering of alerts produced by the adaptive ranking model is compared to baselines of randomly ordered, optimally ordered, and static-analysis-tool-ordered alerts on a small role-based health care application. The adaptive ranking model surfaces 81% of the true positive alerts within the first 20% of alerts inspected, whereas an average over 50 random orderings of the same alerts surfaces only 22% of the true positive alerts at the same point.
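To make the feedback loop concrete, the sketch below implements one plausible reading of such an adaptive ranker in Python. It is illustrative only: the class names, the grouping of alerts by type, and the pseudo-count scoring formula are assumptions for the sketch, not the model described in the paper.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class TypeStats:
    """Feedback counts for one alert type, seeded with an
    application-specific prior (hypothetical pseudo-counts standing in
    for the paper's historical ranking-factor data)."""
    fixed: int = 0         # true positives the developer fixed
    suppressed: int = 0    # false positives the developer suppressed
    prior_tp: float = 1.0  # historical pseudo-count for true positives
    prior_fp: float = 1.0  # historical pseudo-count for false positives

    def score(self) -> float:
        # Posterior-mean estimate that the next alert of this
        # type is a true positive.
        return (self.fixed + self.prior_tp) / (
            self.fixed + self.suppressed + self.prior_tp + self.prior_fp
        )


class AdaptiveRanker:
    """Re-rank uninspected alerts after every piece of developer feedback."""

    def __init__(self, alerts):
        # alerts: iterable of (alert_id, alert_type) pairs.
        self.pending = dict(alerts)
        self.stats = defaultdict(TypeStats)

    def next_alert(self):
        # Present the alert whose type currently has the highest
        # estimated true-positive likelihood.
        return max(self.pending, key=lambda a: self.stats[self.pending[a]].score())

    def record_feedback(self, alert_id, is_true_positive):
        # The developer either fixed the alert (true positive) or
        # suppressed it (false positive); every uninspected alert of
        # the same type inherits the updated score, which is one way
        # to exploit the homogeneity the abstract mentions.
        alert_type = self.pending.pop(alert_id)
        if is_true_positive:
            self.stats[alert_type].fixed += 1
        else:
            self.stats[alert_type].suppressed += 1


# Usage: fixing one "null-deref" alert promotes the other one.
ranker = AdaptiveRanker([(1, "null-deref"), (2, "null-deref"), (3, "dead-store")])
first = ranker.next_alert()             # all scores tie at 0.5; alert 1 comes first
ranker.record_feedback(first, is_true_positive=True)
print(ranker.next_alert())              # 2: "null-deref" now outranks "dead-store"
```

The property this sketch preserves is that a single piece of feedback re-ranks every uninspected alert of the same type, so confirmed true positives pull similar alerts toward the front of the inspection queue.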