Analyzer-generated and human-judged predictors of computer program readability

  • Authors:
  • Gerrit E. DeYoung; Garry R. Kampen; James M. Topolski

  • Venue:
  • CHI '82 Proceedings of the 1982 Conference on Human Factors in Computing Systems
  • Year:
  • 1982

Abstract

The readability of a computer program has recently attracted considerable interest, deriving in part from its expected close relationship with program maintainability; debugging and modification expenses are understood to account for a large proportion of software costs over the life of the software. A computable measure of readability would therefore be useful to program developers during coding and to those assuming responsibility for maintenance of software developed elsewhere. In a series of Algol 68 programs, analyzer-generated (machine-computable) and human-judged program factors were examined. The first two of the present authors found that program length and reasonable practice concerning identifier length were excellent predictors of judgments of readability. These predictors were chosen from a large set of analyzer-generated predictors, including the software science measures defined by Halstead and several others; the analyzer-generated predictors were found to replicably estimate a high proportion (41 percent) of the variance in new readability judgments. While an estimate of readability based only on analyzer-generated predictors would clearly be useful, human ratings (such as quality of comments, logicality of control flow, and meaningfulness of identifier names) were examined to determine whether they could add significantly to the quality of readability estimates. Adding the rating of well-structured control flow to the set of analyzer-generated predictors increased the proportion of replicably estimated variance in new readability judgments from 41 to 72 percent.
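
The abstract describes regressing human readability judgments on a set of predictors and reporting the proportion of variance explained. The sketch below illustrates that general idea only; the paper does not publish its data or exact procedure, so the data here are synthetic, the predictor names (length, identifier_practice, control_flow_rating) are hypothetical, ordinary least squares is assumed, and the halstead_volume helper is included merely as one example of a "software science" measure of the kind the abstract mentions.

```python
# Illustrative sketch: regress readability judgments on analyzer-generated
# predictors and report R^2 (proportion of variance explained).
# All data below are synthetic; nothing here reproduces the study's results.
import numpy as np


def halstead_volume(n1: int, n2: int, N1: int, N2: int) -> float:
    """Halstead volume V = N * log2(n), one of the software science measures
    (n1/n2: distinct operators/operands, N1/N2: total operators/operands)."""
    return (N1 + N2) * np.log2(n1 + n2)


def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """Fit ordinary least squares y ~ X (with intercept) and return R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)   # OLS coefficients
    ss_res = np.sum((y - X1 @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot


rng = np.random.default_rng(0)
n_programs = 40

# Hypothetical analyzer-generated predictors (program length, a score for
# identifier-length practice) and one human rating (control-flow structure).
length = rng.integers(20, 400, n_programs).astype(float)
identifier_practice = rng.uniform(0, 1, n_programs)
control_flow_rating = rng.uniform(1, 7, n_programs)

# Synthetic readability judgments driven by all three factors plus noise.
readability = (-0.004 * length + 1.5 * identifier_practice
               + 0.6 * control_flow_rating + rng.normal(0, 0.5, n_programs))

machine_only = np.column_stack([length, identifier_practice])
combined = np.column_stack([length, identifier_practice, control_flow_rating])

print(f"R^2, analyzer-generated predictors only: {r_squared(machine_only, readability):.2f}")
print(f"R^2, plus control-flow rating:           {r_squared(combined, readability):.2f}")
```

Note that the 41 and 72 percent figures in the abstract refer to variance replicably estimated in new readability judgments, i.e. a held-out comparison rather than fit on the same sample; the in-sample R^2 computed above would overstate that kind of predictive accuracy.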