A metric for software readability

  • Authors:
  • Raymond P. L. Buse; Westley R. Weimer

  • Affiliations:
  • University of Virginia, Charlottesville, VA, USA; University of Virginia, Charlottesville, VA, USA

  • Venue:
  • ISSTA '08: Proceedings of the 2008 International Symposium on Software Testing and Analysis
  • Year:
  • 2008

Abstract

In this paper, we explore the concept of code readability and investigate its relation to software quality. With data collected from human annotators, we derive associations between a simple set of local code features and human notions of readability. Using those features, we construct an automated readability measure and show that it can be 80% effective, and better than a human on average, at predicting readability judgments. Furthermore, we show that this metric correlates strongly with two traditional measures of software quality: code changes and defect reports. Finally, we discuss the implications of this study for programming language design and engineering practice. For example, our data suggest that comments, in and of themselves, are less important than simple blank lines to local judgments of readability.
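To make the abstract's idea of combining "a simple set of local code features" into a readability score more concrete, here is a minimal Python sketch. The feature set, weights, and scoring function below are hypothetical stand-ins chosen for illustration: the paper derives its features and fits its model from human annotator data, so this is a sketch of the general shape of such a metric, not the authors' actual method.

```python
import re


def local_features(snippet: str) -> dict:
    """Compute a few simple, purely local features of a code snippet,
    in the spirit of the paper (line length, identifiers, blank lines,
    comments). This exact feature set is an illustrative assumption."""
    lines = snippet.splitlines() or [""]
    identifiers = re.findall(r"[A-Za-z_]\w*", snippet)
    return {
        "avg_line_length": sum(len(l) for l in lines) / len(lines),
        "max_line_length": max(len(l) for l in lines),
        "avg_identifier_length": (
            sum(map(len, identifiers)) / len(identifiers) if identifiers else 0.0
        ),
        "blank_line_frac": sum(1 for l in lines if not l.strip()) / len(lines),
        "comment_frac": sum(1 for l in lines if l.lstrip().startswith("#")) / len(lines),
    }


# Hypothetical hand-picked weights standing in for a trained model; the
# paper instead learns the feature-to-judgment mapping from annotator data.
WEIGHTS = {
    "avg_line_length": -0.02,
    "max_line_length": -0.01,
    "avg_identifier_length": -0.05,
    "blank_line_frac": 1.5,
    "comment_frac": 0.3,
}
BIAS = 1.0


def readability_score(snippet: str) -> float:
    """Linear combination of local features; higher means 'more readable'
    under these toy weights."""
    features = local_features(snippet)
    return BIAS + sum(WEIGHTS[name] * value for name, value in features.items())


if __name__ == "__main__":
    dense = "x=a+b*c-d/e;y=x*x+foo(bar(baz(x)))\nprint(y)"
    spaced = "total = price + tax\n\n# report the result\nprint(total)"
    print(readability_score(dense))   # lower score: long, dense lines
    print(readability_score(spaced))  # higher score: short lines, blank line
```

Note that the toy weights above deliberately echo the abstract's closing observation: the blank-line feature is weighted more heavily than the comment feature, mirroring the finding that blank lines matter more than comments to local readability judgments.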