An Examination of Fault Exposure Ratio

  • Authors:
  • Yashwant K. Malaiya, Anneliese von Mayrhauser, Pradip K. Srimani

  • Venue:
  • IEEE Transactions on Software Engineering, special issue on software reliability
  • Year:
  • 1993

Abstract

The fault exposure ratio, K, is an important factor that controls the per-fault hazard rate and, hence, the effectiveness of software testing. The authors examine the variation of K with fault density, which itself declines as testing progresses. Because the remaining faults become harder to find, K should decline if testing is strictly random. However, it is shown that at lower fault densities K tends to increase. This is explained using the hypothesis that real testing is more efficient than strictly random testing, especially toward the end of the test phase. Data sets from several different projects (in the United States and Japan) are analyzed. When the two factors, namely the shift in the detectability profile and the nonrandomness of testing, are combined, the analysis leads to the logarithmic model, which is known to have superior predictive capability.
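
For context, a minimal sketch of the standard relations behind the abstract, in Musa's notation (the symbols f, r, I, N(t), \beta_0, and \beta_1 are background assumptions, not defined in the abstract itself): the failure intensity is

  \lambda(t) = f \, K \, N(t), \qquad f = r / I,

where r is the instruction execution rate, I the program size in object instructions, and N(t) the number of remaining faults; the per-fault hazard rate is therefore f K, directly proportional to K. The logarithmic model mentioned at the end is commonly given in the Musa-Okumoto form

  \mu(t) = \beta_0 \ln(1 + \beta_1 t),

with \mu(t) the expected number of failures observed by execution time t.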