Estimating the number of remaining defects (or failures) in software can help test managers make release decisions during testing. Several methods exist to estimate defect content, among them a variety of software reliability growth models (SRGMs). SRGMs have underlying assumptions that are often violated in practice, but empirical evidence has shown that many models are quite robust despite these violations. The problem is that, because of assumption violations, it is often difficult to know which models to apply in practice. We present an empirical method for selecting SRGMs to make release decisions. The method provides guidelines for choosing, as failures are reported during the test phase, which of the candidate SRGMs is the best model to use. The method applies various SRGMs iteratively during system test: each is fitted to weekly cumulative failure data and used to estimate the expected number of failures remaining in the software after release. If an SRGM passes the proposed criteria, it may then be used to make release decisions. The method is applied in a case study using defect reports from system testing of three releases of a large medical record system, to determine how well it predicts the expected total number of failures.
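The fit-and-estimate step described above can be sketched in code. The sketch below fits a single Goel-Okumoto NHPP model, mu(t) = a*(1 - exp(-b*t)), to weekly cumulative failure counts by least squares and reports the expected number of remaining failures, a - mu(T). The model choice, the grid-search fitting procedure, and the weekly counts are illustrative assumptions for exposition; they are not the paper's actual data, model set, or selection criteria.

```python
import math

# Hypothetical weekly cumulative failure counts from system test
# (illustrative data showing typical reliability-growth saturation).
weeks = list(range(1, 13))
cum_failures = [12, 22, 30, 37, 42, 46, 49, 51, 53, 54, 55, 56]

def fit_goel_okumoto(t, y):
    """Least-squares fit of mu(t) = a*(1 - exp(-b*t)).

    For a fixed b, the optimal a has a closed form
    (a = sum(y_i*m_i) / sum(m_i^2), with m_i = 1 - exp(-b*t_i)),
    so only b needs to be searched; a coarse grid suffices here.
    """
    best = None  # (sse, a, b)
    for i in range(1, 2000):
        b = i / 1000.0  # search b over (0, 2]
        m = [1.0 - math.exp(-b * ti) for ti in t]
        a = sum(yi * mi for yi, mi in zip(y, m)) / sum(mi * mi for mi in m)
        sse = sum((yi - a * mi) ** 2 for yi, mi in zip(y, m))
        if best is None or sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]

a, b = fit_goel_okumoto(weeks, cum_failures)
# a estimates the expected total number of failures; subtracting the
# failures already observed gives the expected remaining failures.
remaining = a - cum_failures[-1]
print(f"a = {a:.1f}, b = {b:.3f}, expected remaining failures = {remaining:.1f}")
```

In the paper's method this fit would be redone each week as new failure data arrive, for each candidate SRGM, and only models meeting the proposed goodness-of-fit and stability criteria would inform the release decision.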