The Use of Self Checks and Voting in Software Error Detection: An Empirical Study
IEEE Transactions on Software Engineering
Partition Testing Does Not Inspire Confidence (Program Testing)
IEEE Transactions on Software Engineering
Analyzing Partition Testing Strategies
IEEE Transactions on Software Engineering
The infeasibility of experimental quantification of life-critical software reliability
SIGSOFT '91 Proceedings of the conference on Software for critical systems
Estimating the Probability of Failure When Testing Reveals No Failures
IEEE Transactions on Software Engineering
PIE: A Dynamic Failure-Based Technique
IEEE Transactions on Software Engineering
The Art of Software Testing
Engineering Software Under Statistical Quality Control
IEEE Software
Are We Testing for True Reliability?
IEEE Software
Self-Checking against Formal Specifications
ICCI '92 Proceedings of the Fourth International Conference on Computing and Information
Foundations of software testing: dependability theory
SIGSOFT '94 Proceedings of the 2nd ACM SIGSOFT symposium on Foundations of software engineering
Software trustability analysis
ACM Transactions on Software Engineering and Methodology (TOSEM)
Using testability measures for dependability assessment
Proceedings of the 17th international conference on Software engineering
Predicting dependability by testing
ISSTA '96 Proceedings of the 1996 ACM SIGSOFT international symposium on Software testing and analysis
A reliability model combining representative and directed testing
Proceedings of the 18th international conference on Software engineering
On the Use of Testability Measures for Dependability Assessment
IEEE Transactions on Software Engineering
A Framework for Specification-Based Testing
IEEE Transactions on Software Engineering
Testability, fault size and the domain-to-range ratio: An eternal triangle
Proceedings of the 2000 ACM SIGSOFT international symposium on Software testing and analysis
Modeling reliability growth during non-representative testing
Annals of Software Engineering
Quality assurance and testing for safety systems
Annals of Software Engineering
Software Testability: The New Verification
IEEE Software
Stopping Criteria Comparison: Towards High Quality Behavioral Verification
ISQED '01 Proceedings of the 2nd International Symposium on Quality Electronic Design
Achieving the Quality of Verification for Behavioral Models with Minimum Effort
ISQED '00 Proceedings of the 1st International Symposium on Quality of Electronic Design
"Good enough" software reliability estimation plug-in for Eclipse
eclipse '03 Proceedings of the 2003 OOPSLA workshop on eclipse technology eXchange
Toward a Software Testing and Reliability Early Warning Metric Suite
Proceedings of the 26th International Conference on Software Engineering
Controlling factors in evaluating path-sensitive error detection techniques
Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering
Most of the effort that goes into improving the quality of software paradoxically does not lead to quantitative, measurable quality. Software developers and quality-assurance organizations spend a great deal of effort preventing, detecting, and removing “defects”: parts of software responsible for operational failure. But software quality can be measured only by statistical parameters like hazard rate and mean time to failure, measures whose connection with defects and with the development process is little understood.

At the same time, direct reliability assessment by random testing of software is impractical. The levels we would like to achieve, on the order of 10^6 to 10^8 executions without failure, cannot be established in reasonable time. Some limitations of reliability testing can be overcome, but the “ultrareliable” region above 10^8 failure-free executions is likely to remain forever untestable.

We propose a new way of looking at the software reliability program. Defect-based efforts should amplify the significance of reliability testing. That is, developers should demonstrate that the actual reliability is better than the measurement. We give an example of a simple reliability-amplification technique, and suggest applications to systematic testing and formal development methods.