Are Found Defects an Indicator of Software Correctness? An Investigation in a Controlled Case Study

  • Authors:
  • Per Runeson; Mans Holmstedt Jonsson; Fredrik Scheja

  • Affiliations:
  • Lund University, Sweden (all authors)

  • Venue:
  • ISSRE '04 Proceedings of the 15th International Symposium on Software Reliability Engineering
  • Year:
  • 2004


Abstract

In quality assurance programs, we want indicators of software quality, especially software correctness. The number of defects found during inspection and testing is often used as the basis for indicators of software correctness. However, there is a paradox in this approach, since it is the remaining defects, not the found ones, that impact software correctness negatively. In order to investigate the validity of using found defects or other product or process metrics as indicators of software correctness, a controlled case study is launched. 57 sets of 10 different programs from the PSP course are assessed using acceptance test suites for each program. In the analysis, the number of defects found during acceptance testing is compared to the number of defects found during development, code size, the share of development time spent on testing, etc. From a correlation analysis it is concluded that 1) fewer defects remain in larger programs, 2) more defects remain when a larger share of development effort is spent on testing, and 3) no correlation exists between found defects and correctness. We interpret these observations as follows: 1) the smaller programs do not fulfill the expected requirements, 2) a large share of effort spent on testing indicates a "hacker" approach to software development, and 3) more research is needed to elaborate on this issue.
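The abstract describes correlating acceptance-test defect counts with development-phase metrics. The sketch below illustrates the general shape of such a correlation analysis; it is not the paper's actual procedure or dataset. The column names and the small data table are assumptions for illustration only (the study's 57 sets of 10 PSP programs are not reproduced here), and Spearman rank correlation is one plausible choice of method, not necessarily the one used by the authors.

```python
# Minimal sketch of a correlation analysis like the one described in the
# abstract. All data below is hypothetical, invented for illustration.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-program records: defects found in acceptance testing
# (a proxy for remaining defects) versus metrics from development.
data = pd.DataFrame({
    "acceptance_defects": [3, 0, 5, 1, 2, 4],
    "development_defects": [12, 7, 15, 9, 10, 14],  # defects found during development
    "loc": [220, 150, 340, 180, 200, 310],          # program size in lines of code
    "test_effort_share": [0.35, 0.20, 0.50, 0.25, 0.30, 0.45],  # fraction of effort on testing
})

# Correlate each development metric with the acceptance-test defect count.
for metric in ["development_defects", "loc", "test_effort_share"]:
    rho, p = spearmanr(data["acceptance_defects"], data[metric])
    print(f"{metric}: rho={rho:.2f}, p={p:.3f}")
```

In this setup, a near-zero rho for "development_defects" would correspond to the paper's third finding: defects found during development do not predict the defects remaining at acceptance testing.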