Does Code Decay? Assessing the Evidence from Change Management Data

  • Authors:
  • Stephen G. Eick; Todd L. Graves; Alan F. Karr; J. S. Marron; Audris Mockus

  • Affiliations:
  • Bell Labs, Naperville, IL; Los Alamos National Lab, Los Alamos, NM; National Institute of Statistical Sciences, Research Triangle Park, NC; Univ. of North Carolina, Chapel Hill; Bell Labs, Naperville, IL

  • Venue:
  • IEEE Transactions on Software Engineering

  • Year:
  • 2001

Abstract

A central feature of the evolution of large software systems is that change, which is necessary to add new functionality, accommodate new hardware, and repair faults, becomes increasingly difficult over time. In this paper, we approach this phenomenon, which we term code decay, scientifically and statistically. We define code decay and propose a number of measurements (code decay indices) on software and on the organizations that produce it, which serve as symptoms, risk factors, and predictors of decay. Using an unusually rich data set (the fifteen-plus-year change history of the millions of lines of software for a telephone switching system), we find mixed, but on the whole persuasive, statistical evidence of code decay, which is corroborated by developers of the code. Suggestive indications that perfective maintenance can retard code decay are also discussed.
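
The abstract does not spell out how a code decay index might be computed from change management data. As a purely illustrative sketch (not the authors' method), one simple indicator of this kind is the span of a change, i.e., the number of files a single change touches, tracked over time; a rising mean span suggests that conceptually small changes increasingly require edits scattered across the system. The record layout and field names below are assumptions made for illustration.

```python
from collections import defaultdict

def yearly_mean_span(changes):
    """Average span (files touched per change) grouped by year.

    `changes` is assumed to be an iterable of dicts with keys
    'year' and 'files' (the set of files a change touched);
    this layout is hypothetical, not taken from the paper.
    """
    spans = defaultdict(list)
    for change in changes:
        spans[change["year"]].append(len(change["files"]))
    # Mean span per year, in chronological order.
    return {year: sum(s) / len(s) for year, s in sorted(spans.items())}

# Illustrative usage with made-up change records.
history = [
    {"year": 1985, "files": {"a.c"}},
    {"year": 1985, "files": {"a.c", "b.c"}},
    {"year": 1999, "files": {"a.c", "b.c", "c.c", "d.c"}},
]
print(yearly_mean_span(history))  # {1985: 1.5, 1999: 4.0}
```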