Operational testing, which aims to generate sequences of test cases with the same statistical properties as those that would be experienced in real operational use, can be used to obtain quantitative measures of software reliability. In the case of safety-critical software it is common to demand that all known faults be removed. This means that if a failure occurs during operational testing, the offending fault must be identified and removed. Thus an operational test for safety-critical software takes the form of a specified number of test cases (or a specified period of working) that must be executed failure-free. This paper addresses the problem of specifying the number of test cases (or time periods) required for a test when the previous test has terminated as a result of a failure. It has been proposed that, after the obligatory fix of the offending fault, the software should be treated as if it were completely novel and required to pass exactly the same test as originally specified. The reasoning here is claimed to be conservative, inasmuch as no credit is given for any failure-free operation prior to the failure that terminated the test. We show that, in fact, this approach is not conservative in all cases, and propose instead some new Bayesian stopping rules. We show that the degree of conservatism in stopping rules depends upon the precise way in which the reliability requirement is expressed. We define a particular form of conservatism that seems desirable on intuitive grounds, and show that the stopping rules exhibiting this conservatism are precisely the ones that seem preferable on other grounds.
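To make the setting concrete, the following sketch contrasts the classical "specified number of failure-free test cases" with a simple Bayesian alternative. This is an illustration of the general idea only, not the stopping rules proposed in the paper: the function names are hypothetical, and the Bayesian variant assumes a uniform Beta(1, 1) prior on the per-demand failure probability, for which the posterior after s failure-free demands is Beta(1, 1 + s) with the closed-form CDF 1 - (1 - p)^(s + 1).

```python
import math

def classical_test_length(p, alpha):
    """Number of failure-free demands n such that a system whose true
    per-demand failure probability equals the target p would pass the
    test with probability at most alpha: (1 - p)^n <= alpha."""
    return math.ceil(math.log(alpha) / math.log(1.0 - p))

def bayesian_demands_needed(p, alpha):
    """Smallest number of failure-free demands s such that, under a
    uniform Beta(1, 1) prior, the posterior probability that the
    per-demand failure probability is at most p reaches 1 - alpha.
    The posterior CDF at p is 1 - (1 - p)^(s + 1)."""
    s = 0
    while 1.0 - (1.0 - p) ** (s + 1) < 1.0 - alpha:
        s += 1
    return s

# Example: demonstrate a failure probability of at most 1e-3 with 95%
# confidence. The two criteria require almost the same test length here;
# the interesting differences arise when a test fails partway through
# and the question becomes how much, if any, credit to give for the
# failure-free demands already observed.
print(classical_test_length(1e-3, 0.05))   # classical test length
print(bayesian_demands_needed(1e-3, 0.05)) # Bayesian, uniform prior
```

Under the "treat the software as novel" proposal criticized in the abstract, a failure on demand 2,900 of a 2,995-demand test would reset the count to zero after the fix; the Bayesian formulation makes explicit what prior (and hence what credit for past failure-free working) is being assumed instead.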