Replications are widely regarded as important contributions for investigating the generality of empirical studies. By replicating an original study, one can show whether its results remain valid in another context, outside the specific environment in which the original study was conducted; the outcome of the replication indicates how much confidence can be placed in the original findings. We present a replication of a method for selecting software reliability growth models (SRGMs) to decide when to stop testing and release software. We applied the selection method in an empirical study conducted in a development environment different from that of the original study. The results of the replication show that, with adjusted values for stability and curve fit, the selection method works well on the available empirical system-test data; that is, the method was applicable in an environment different from the original one. Applying the SRGMs to failures observed during functional testing produced predictions with low relative error, providing a useful way to obtain good estimates of the total number of failures to expect during functional testing.
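To make the idea of an SRGM prediction and its relative error concrete, the following is a minimal illustrative sketch (not the paper's actual selection method or data): it fits the classic Goel-Okumoto model mu(t) = a*(1 - exp(-b*t)) to synthetic cumulative-failure data by a simple grid-search least-squares procedure, where a estimates the total number of failures to expect. The function name, grid ranges, and synthetic data are all assumptions made for the example.

```python
import math

def fit_goel_okumoto(times, cum_failures):
    """Grid-search least-squares fit of mu(t) = a * (1 - exp(-b * t)).

    For each candidate b, the optimal a has a closed form, so only b
    is searched. Returns the fitted (a, b); a estimates total failures.
    (Illustrative sketch only; real tools use proper nonlinear solvers.)
    """
    best = None  # (sse, a, b)
    b = 0.01
    while b <= 0.5:
        f = [1 - math.exp(-b * t) for t in times]
        # Closed-form least-squares a for this fixed b
        a = sum(y * fi for y, fi in zip(cum_failures, f)) / sum(fi * fi for fi in f)
        sse = sum((a * fi - y) ** 2 for fi, y in zip(f, cum_failures))
        if best is None or sse < best[0]:
            best = (sse, a, b)
        b += 0.005
    return best[1], best[2]

# Synthetic cumulative-failure counts generated from a known model
# (true a = 100 total failures, true b = 0.1), purely for illustration.
times = list(range(1, 21))
data = [100 * (1 - math.exp(-0.1 * t)) for t in times]

a_hat, b_hat = fit_goel_okumoto(times, data)

# Relative error of the predicted total failure count vs. the true total,
# analogous to how prediction quality is judged in the study.
rel_err = abs(a_hat - 100) / 100
```

On noise-free synthetic data the fit recovers the generating parameters almost exactly, so the relative error of the predicted total is near zero; on real test data the same relative-error measure quantifies how trustworthy a stop-test estimate is.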