During software testing, defect prediction approaches measure the current reliability status, forecast future program failures, and indicate how many defects must be removed before shipping. Existing approaches often require faults to be detected and identified as new before a model-based trend can be fitted. Although failures may occur frequently during regression testing, it is not evident which of them stem from new faults. Consequently, reliability growth trending can only be performed in sync with fault identification and repair, which typically takes place between regression test cycles. In this paper we present a dynamic reasoning approach, coined Dracon, that estimates the number of defects in the system early in the regression testing process. Dracon is based on Bayesian fault diagnosis over abstractions of program traces (also known as program spectra). Experimental results show that Dracon consistently estimates the exact number of (injected) defects, provided sufficient test cases are available. Furthermore, we propose a simple analytic performance model to assess the influence of failed test cases on the estimation, and we observe that our empirical findings agree with the model.
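To make the idea concrete, below is a minimal sketch of Bayesian fault diagnosis over program spectra in the spirit the abstract describes; it is not Dracon's actual algorithm. The intermittency parameter g, the per-component prior, the candidate enumeration up to size 2, and the toy spectra matrix are all illustrative assumptions.

```python
from itertools import combinations

def posterior(spectra, errors, candidates, g=0.1, prior=0.01):
    """Posterior over candidate diagnoses (sets of faulty components).

    spectra: one activity row per test (1 = component executed by the test).
    errors:  one outcome per test (1 = test failed).
    g:       assumed probability that a faulty component, when executed,
             still behaves correctly (fault intermittency) -- an assumption.
    prior:   assumed a priori probability that a single component is faulty.
    """
    probs = {}
    for d in candidates:
        p = prior ** len(d)  # independent prior over the components in d
        for row, e in zip(spectra, errors):
            touched = [c for c in d if row[c]]       # faulty components executed by this test
            p_pass = g ** len(touched)               # test passes only if all touched faults stay silent
            p *= p_pass if e == 0 else 1.0 - p_pass  # Bayesian update with the observed outcome
        probs[d] = p
    z = sum(probs.values()) or 1.0
    return {d: p / z for d, p in probs.items()}      # normalized posterior

# Toy example: three components, three tests.
spectra = [[1, 1, 0],
           [0, 1, 1],
           [1, 0, 1]]
errors = [1, 0, 1]  # tests 1 and 3 failed
cands = [c for k in (1, 2) for c in combinations(range(3), k)]
post = posterior(spectra, errors, cands)
defect_estimate = sum(len(d) * p for d, p in post.items())  # expected number of defects
```

The posterior-weighted candidate size in the last line yields an expected defect count, which is the kind of estimate the abstract refers to; in this toy run the passing test exonerates component 1, so the posterior mass concentrates on the single-fault diagnosis {0} and the estimate stays close to one.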