What makes a reliable program: few bugs, or a small failure rate?

  • Authors: B. Littlewood
  • Affiliations: The City University, London, England
  • Venue: AFIPS '80 Proceedings of the May 19-22, 1980, national computer conference
  • Year: 1980

Abstract

It is instructive to look at some of the reasons advanced by software developers for their reluctance to use software reliability measurement tools. Here are a few common ones:

(A) "Software reliability models are statistical. Programs are deterministic. If certain input conditions cause a malfunction today, then the same conditions are certain to cause a malfunction if they occur tomorrow. Where is the randomness?"

(B) "I am paid to write reliable programs. I use the best programming methodology to achieve this. Software reliability estimation procedures would not help me to improve the reliability of my programs."

(C) "We verify our software. When it leaves us it is correct."

(D) "I ran your software reliability measurement program on some data from a current project of ours. It said there was an infinite number of bugs left in the program. Who are you trying to kid?"

(E) (same manager as in D, but one week later) "We corrected a couple of bugs and ran the reliability measurement program again. This time it said that there were 200 bugs left. Infinity minus two equals two hundred? Is this the new math?"

(F) "We put a lot of effort into testing. The selection of test data is a systematic process designed to seek out bugs. Reliability estimation based on such test data would be no guide to the performance of the program in a use environment."

(G) "We are writing an air traffic control program. Total system crash would be catastrophic. Other failures range from serious to trivial. Reliability models do not distinguish between failures of differing severity."
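The "infinity minus two" complaint in (D) and (E) is less absurd than it sounds. Reliability-growth models of the period, such as the Jelinski-Moranda model (named here for illustration; the abstract does not say which tool the manager used), estimate the initial number of faults N by maximum likelihood from inter-failure times, and that estimate is genuinely infinite whenever the data show no reliability growth. A minimal sketch, with hypothetical inter-failure times, assuming the Jelinski-Moranda likelihood:

```python
import numpy as np

def jm_profile_loglik(N, t):
    """Profile log-likelihood of the Jelinski-Moranda model for a candidate
    initial fault count N, given observed inter-failure times t[0..n-1].
    The hazard before the i-th failure is phi * (N - i + 1): each fix
    removes one fault and lowers the failure rate by a constant phi."""
    n = len(t)
    i = np.arange(1, n + 1)
    remaining = N - i + 1                  # faults still present before failure i
    phi_hat = n / np.sum(remaining * t)    # MLE of phi for this candidate N
    return n * np.log(phi_hat) + np.sum(np.log(remaining)) - n

def estimate_initial_faults(t, n_cap=10_000):
    """Scan integer N >= n for the maximum-likelihood fault count. If the
    likelihood is still rising at n_cap, the MLE does not exist: the data
    show no reliability growth and the model answers 'infinitely many'."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    best_N = max(range(n, n_cap + 1), key=lambda N: jm_profile_loglik(N, t))
    return float("inf") if best_N == n_cap else best_N

# Hypothetical inter-failure times (hours). Lengthening gaps = growth.
improving = [1, 2, 4, 8, 16, 32, 64, 128]
# Gaps that do not lengthen: the model infers an inexhaustible bug supply.
not_improving = [12, 11, 12, 10, 11, 10, 9, 10]

print(estimate_initial_faults(improving))      # small, finite estimate
print(estimate_initial_faults(not_improving))  # inf
```

With improving data the profile likelihood peaks at a small N; with flat or worsening data it rises without bound, so the "number of bugs left" is unbounded. Fixing a couple of bugs then changes the data enough to pull the estimate back to a finite value, which is exactly the jump from infinity to 200 that provoked objection (E).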