Predicting dependability by testing

  • Authors: Dick Hamlet
  • Affiliation: Portland State University, Department of Computer Science, Center for Software Quality Research, Portland, OR
  • Venue: ISSTA '96: Proceedings of the 1996 ACM SIGSOFT International Symposium on Software Testing and Analysis
  • Year: 1996


Abstract

In assessing the quality of software, we would like to make engineering judgements similar to those based on statistical quality control. Ideally, we want to support statements like: "The confidence that this program's result at X is correct is p," where X is a particular vector of inputs and the confidence p is obtained from measurements of the software (perhaps involving X). For the theory to be useful, it must be feasible to predict values of p near 1 for many programs, for most values of X.

Blum's theory of self-checking/correcting programs has exactly the right character, but it applies to only a few unusual problems. Conventional software reliability theory is widely applicable, but it yields only confidence in a failure intensity, and the measurements required to support a correctness-like failure intensity (say, 10^-9/demand) are infeasible. Voas's sensitivity theory remedies these problems of reliability theory, but his model is too simple to be very plausible. In this paper we combine these ideas (reliability, sensitivity, and self-checking) to obtain new results on "dependability": plausible predictions of software quality.