Determining the Distribution of Maintenance Categories: Survey versus Measurement

  • Authors:
  • Stephen R. Schach, Bo Jin, Liguo Yu, Gillian Z. Heller, Jeff Offutt

  • Affiliations:
  • Stephen R. Schach: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA (srs@vuse.vanderbilt.edu)
  • Bo Jin: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA (bo.jin@vanderbilt.edu)
  • Liguo Yu: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA (liguo.yu@vanderbilt.edu)
  • Gillian Z. Heller: Department of Statistics, Macquarie University, Sydney, NSW 2109, Australia (gheller@efs.mq.edu.au)
  • Jeff Offutt: Department of Information and Software Engineering, George Mason University, Fairfax, VA 22030, USA (ofut@ise.gmu.edu)

  • Venue:
  • Empirical Software Engineering
  • Year:
  • 2003

Abstract

In 1978, Lientz, Swanson, and Tompkins published the results of a survey on software maintenance. They found that 17.4% of maintenance effort was corrective in nature, 18.2% adaptive, 60.3% perfective, and 4.1% other. We refer to this as the “LST” result. We contrast this survey-based result with our empirical results from the analysis of data on the repeated maintenance of three software products: a commercial real-time product, the Linux kernel, and GCC. For all three products, and at both levels of granularity we considered, the observed distributions of maintenance categories differed from LST at a very high level of statistical significance. In particular, the proportion of corrective maintenance was always more than twice the LST value; for the summed data, it was more than three times the LST value. We suggest several explanations for the observed differences, including inaccuracies on the part of the maintenance managers who responded to the LST survey.
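The comparison described in the abstract is a goodness-of-fit question: do observed counts of maintenance changes match the LST percentages? A minimal sketch of such a test is below, using a hand-rolled chi-square statistic against the LST proportions. The observed counts are illustrative placeholders, not the paper's actual data.

```python
# Hedged sketch: chi-square goodness-of-fit test of observed maintenance
# category counts against the LST survey proportions (1978).
# The "observed" counts below are invented for illustration only.

LST_PROPORTIONS = {
    "corrective": 0.174,
    "adaptive": 0.182,
    "perfective": 0.603,
    "other": 0.041,
}

def chi_square_statistic(observed, expected_props):
    """Chi-square statistic for observed counts vs. expected proportions."""
    total = sum(observed.values())
    stat = 0.0
    for category, count in observed.items():
        expected = expected_props[category] * total
        stat += (count - expected) ** 2 / expected
    return stat

# Illustrative counts in which corrective work is roughly double its LST share.
observed = {"corrective": 400, "adaptive": 150, "perfective": 420, "other": 30}
stat = chi_square_statistic(observed, LST_PROPORTIONS)

# With 4 categories there are 3 degrees of freedom; the 0.001 critical
# value is about 16.27, so a statistic far above it indicates a very
# highly significant departure from the LST distribution.
print(stat > 16.27)  # → True for these illustrative counts
```

A statistic this far beyond the critical value is what the phrase "very highly significantly different" refers to in the abstract.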