Certifying the reliability of software

  • Authors:
  • P. A. Currit; M. Dyer; H. D. Mills


  • Venue:
  • IEEE Transactions on Software Engineering
  • Year:
  • 1986


Abstract

The accepted approach to software development is to specify and design a product in response to a requirements analysis and then to test the software selectively with cases perceived to be typical of those requirements. Frequently the result is a product which works well against inputs similar to those tested but which is unreliable in unexpected circumstances. In contrast, it is possible to embed the software development and testing process within a formal statistical design. In such a design, software testing can be used to make statistical inferences about the reliability of the future operation of the software. In turn, the process of systematically assessing reliability permits certification of the product at delivery, which attests to a public record of defect detection and repair and to a measured level of operating reliability. This paper describes a procedure for certifying the reliability of software before its release to users. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (mean time to failure) of the product at the time of its release. The paper discusses the development of certified software products and the derivation of a statistical model used for reliability projection. Available software test data are used to demonstrate the application of the model in the certification process.
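To illustrate the kind of reliability projection the abstract describes, the following minimal Python sketch fits a geometric MTTF-growth model to interfailure times observed during usage-representative testing and projects the MTTF at release. The model form (MTTF after the k-th fix = M0 * R**k), the fitting method (least squares on log interfailure times), and the sample data are illustrative assumptions, not the paper's exact derivation or data.

```python
import math

def project_mttf(interfailure_times):
    """Fit log(t_k) = log(M0) + k*log(R) by least squares and project
    the MTTF after the next defect repair (assumed model, see above)."""
    n = len(interfailure_times)
    ks = list(range(1, n + 1))                 # index of each failure/fix
    logs = [math.log(t) for t in interfailure_times]
    k_bar = sum(ks) / n
    log_bar = sum(logs) / n
    slope = sum((k - k_bar) * (l - log_bar) for k, l in zip(ks, logs)) \
            / sum((k - k_bar) ** 2 for k in ks)
    intercept = log_bar - slope * k_bar
    m0, r = math.exp(intercept), math.exp(slope)
    projected = m0 * r ** (n + 1)              # MTTF projected after next fix
    return m0, r, projected

if __name__ == "__main__":
    # Hypothetical interfailure times (e.g., CPU hours) between successive
    # failures found under statistically representative testing.
    times = [1.2, 2.0, 3.5, 5.1, 9.8, 14.3]
    m0, r, projected = project_mttf(times)
    print(f"M0 = {m0:.2f}, growth factor R = {r:.2f}, "
          f"projected MTTF at release ~ {projected:.1f}")
```

In this sketch a growth factor R greater than 1 indicates that each defect repair lengthens the expected time between failures, which is the behavior a certification procedure of this kind would look for before release.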