Measurement of software reliability by life testing involves executing the software on a large number of test cases and recording the results. The number of failures observed is used to bound the failure probability, even when that number is zero. Typical analyses assume that every failure that occurs is observed, but in practice failures can occur without being observed. In this paper we examine the effect of imperfect error detection, i.e., the situation in which a failure of the software may go unnoticed. If the conventional analysis associated with life testing is applied, the confidence in the bound on the failure probability is optimistic. Our results show that imperfect error detection does not necessarily prevent life testing from bounding the probability of failure at the very low values required in critical systems. However, we show that the confidence level associated with a bound on failure probability cannot, in general, be made as high as desired unless very strong assumptions are made about the error-detection mechanism. Such assumptions are unlikely to hold in practice, so life testing is likely to be useful only in situations where very high confidence levels are not required.