Software reliability analysis models
IBM Journal of Research and Development
Software reliability: measurement, prediction, application
IEEE Transactions on Software Engineering
An error complexity model for software reliability measurement
ICSE '89 Proceedings of the 11th international conference on Software engineering
Does imperfect debugging affect software reliability growth?
ICSE '89 Proceedings of the 11th international conference on Software engineering
Rationale for fault exposure ratio K
ACM SIGSOFT Software Engineering Notes
Program structure and dynamic models of software reliability: investigation in a simulation environment
An empirical study of a model for program error prediction
ICSE '85 Proceedings of the 8th international conference on Software engineering
Software Reliability Models: Developments, Evaluation and Applications
Using Neural Networks in Reliability Prediction
IEEE Software
Software unit test coverage and adequacy
ACM Computing Surveys (CSUR)
Fault exposure ratio estimation and applications
ISSRE '96 Proceedings of the Seventh International Symposium on Software Reliability Engineering
What do the Software Reliability Growth Model Parameters Represent?
ISSRE '97 Proceedings of the Eighth International Symposium on Software Reliability Engineering
Requirements Volatility and Defect Density
ISSRE '99 Proceedings of the 10th International Symposium on Software Reliability Engineering
Multi-agent-based integrated framework for intra-class testing of object-oriented software
Applied Soft Computing
Expert Systems with Applications: An International Journal
Journal of Electronic Testing: Theory and Applications
Antirandom Test Vectors for BIST in Hardware/Software Systems
Fundamenta Informaticae
The fault exposure ratio, K, is an important factor that controls the per-fault hazard rate and, hence, the effectiveness of software testing. The authors examine the variation of K with fault density, which declines with testing time. Because the remaining faults become harder to find, K should decline if testing is strictly random. However, it is shown that at lower fault densities K tends to increase. This is explained by the hypothesis that real testing is more efficient than strictly random testing, especially toward the end of the test phase. Data sets from several different projects (in the USA and Japan) are analyzed. When the two factors, i.e., the shift in the detectability profile and the nonrandomness of testing, are combined, the analysis leads to the logarithmic model, which is known to have superior predictive capability.
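The logarithmic model the abstract refers to is commonly identified with the Musa-Okumoto logarithmic Poisson execution-time model, in which failure intensity decays as faults are removed. A minimal sketch of its standard equations, assuming illustrative parameter values (lam0, the initial failure intensity, and theta, the failure-intensity decay parameter, are placeholders, not values from the paper):

```python
import math

def log_poisson_mu(t, lam0, theta):
    """Expected cumulative failures by execution time t under the
    Musa-Okumoto logarithmic Poisson model:
        mu(t) = (1/theta) * ln(lam0 * theta * t + 1)
    """
    return (1.0 / theta) * math.log(lam0 * theta * t + 1.0)

def log_poisson_intensity(t, lam0, theta):
    """Failure intensity at execution time t:
        lambda(t) = lam0 / (lam0 * theta * t + 1)
    It starts at lam0 and decays hyperbolically, reflecting that the
    per-fault hazard rate does not drop as fast as a strictly random
    testing assumption would predict.
    """
    return lam0 / (lam0 * theta * t + 1.0)

# Illustrative parameters (assumed, not from the paper).
lam0, theta = 10.0, 0.05
print(log_poisson_intensity(0.0, lam0, theta))   # equals lam0 at t = 0
print(log_poisson_mu(100.0, lam0, theta))        # cumulative failures grow logarithmically
```

Note that `log_poisson_intensity` is the exact derivative of `log_poisson_mu` with respect to `t`, so the two functions form a consistent pair; only the parameter values above are arbitrary.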