Software engineering: reliability, development, and management.
Designing programs that check their work
STOC '89 Proceedings of the twenty-first annual ACM symposium on Theory of computing
The infeasibility of experimental quantification of life-critical software reliability
SIGSOFT '91 Proceedings of the conference on Software for critical systems
PIE: A Dynamic Failure-Based Technique
IEEE Transactions on Software Engineering
Faults on its sleeve: amplifying software reliability testing
ISSTA '93 Proceedings of the 1993 ACM SIGSOFT international symposium on Software testing and analysis
Using testability measures for dependability assessment
Proceedings of the 17th international conference on Software engineering
Software Testability: The New Verification
IEEE Software
Continuity in software systems
ISSTA '02 Proceedings of the 2002 ACM SIGSOFT international symposium on Software testing and analysis
IAT '06 Proceedings of the IEEE/WIC/ACM international conference on Intelligent Agent Technology
Testing the limits of emergent behavior in MAS using learning of cooperative behavior
ECAI 2006: Proceedings of the 17th European Conference on Artificial Intelligence, August 29 -- September 1, 2006, Riva del Garda, Italy
Software quality assurance economics
Information and Software Technology
In assessing the quality of software, we would like to make engineering judgements similar to those based on statistical quality control. Ideally, we want to support statements like: "The confidence that this program's result at X is correct is p," where X is a particular vector of inputs, and confidence p is obtained from measurements of the software (perhaps involving X). For the theory to be useful, it must be feasible to predict values of p near 1 for many programs, for most values of X.

Blum's theory of self-checking/correcting programs has exactly the right character, but it applies to only a few unusual problems. Conventional software reliability theory is widely applicable, but it yields only confidence in a failure intensity, and the measurements required to support a correctness-like failure intensity (say 10^-9/demand) are infeasible. Voas's sensitivity theory remedies these problems of reliability theory, but his model is too simple to be very plausible. In this paper we combine these ideas -- reliability, sensitivity, and self-checking -- to obtain new results on "dependability": plausible predictions of software quality.
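The flavor of Blum-style self-checking can be illustrated with Freivalds' classic result checker for matrix multiplication (a standard example of the technique, not taken from this paper): rather than reverifying the product directly, the checker tests it against random vectors, so each independent trial halves the chance that a wrong result goes undetected. This is a minimal sketch; the function names are illustrative.

```python
import random

def multiply(a, b):
    """Naive n x n matrix product; stands in for the program under check."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def freivalds_check(a, b, c, trials=20):
    """Probabilistically check the claim c == a * b.

    Each trial draws a random 0/1 vector r and compares a*(b*r) with c*r,
    which costs O(n^2) instead of O(n^3). If c is wrong, a single trial
    misses it with probability at most 1/2, so `trials` independent
    rounds bound the escape probability by 2**-trials.
    """
    n = len(a)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        br = [sum(b[i][j] * r[j] for j in range(n)) for i in range(n)]
        abr = [sum(a[i][j] * br[j] for j in range(n)) for i in range(n)]
        cr = [sum(c[i][j] * r[j] for j in range(n)) for i in range(n)]
        if abr != cr:
            return False  # witness found: c is certainly not a * b
    return True  # accepted with confidence >= 1 - 2**-trials
```

Note the character of the verdict: a rejection is certain, while an acceptance carries a quantified confidence p near 1, exactly the kind of statement the abstract asks measurement to support.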