A Review and Evaluation of Software Science. ACM Computing Surveys (CSUR).
Elements of Software Science (Operating and programming systems series).
The Elements of Programming Style.
An experimental investigation of the effect of program structure on program understanding. Proceedings of an ACM Conference on Language Design for Reliable Software.
Programming factors: language features that help explain programming complexity. ACM '78: Proceedings of the 1978 Annual Conference, Volume 2.
A methodology for studying the psychological complexity of computer programs.
A measure of mental effort related to program clarity.
On the use of the cyclomatic number to measure program complexity. ACM SIGPLAN Notices.
An extension to the cyclomatic measure of program complexity. ACM SIGPLAN Notices.
A basis for executing PASCAL programmers. ACM SIGPLAN Notices.
The readability of a computer program has recently attracted considerable interest, deriving in part from its expected close relationship with program maintainability: debugging and modification expenses are understood to account for a large proportion of software costs over the life of the software. A computable measure of readability would therefore be useful both to program developers during coding and to those assuming responsibility for maintaining software developed elsewhere. Analyzer-generated (machine-computable) and human-judged program factors were examined in a series of Algol 68 programs. The first two of the present authors found that program length and sensible practice concerning identifier length were excellent predictors of readability judgments. These predictors were chosen from a large set of analyzer-generated candidates, including the software science measures defined by Halstead and several others; together, the analyzer-generated predictors replicably estimated a substantial proportion (41 percent) of the variance in new readability judgments. While an estimate of readability based only on analyzer-generated predictors would clearly be useful, human ratings (such as quality of comments, logicality of control flow, and meaningfulness of identifier names) were examined to determine whether they could add significantly to the quality of the estimates. Adding the rating of well-structured control flow to the set of analyzer-generated predictors increased the proportion of replicably estimated variance in new readability judgments from 41 to 72 percent.
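The pipeline the abstract describes, extracting machine-computable predictors from source text and then asking how much of the variance in human readability judgments a model built on them can estimate, can be sketched in a few lines. The sketch below is illustrative only, not the authors' analyzer: the regex tokenizer stands in for a real Algol 68 lexer, the operand/operator split is a crude approximation of Halstead's counts, and the names `analyzer_predictors` and `r_squared` are hypothetical.

```python
import math
import re

IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")


def analyzer_predictors(source: str) -> dict:
    """Machine-computable predictors of the kind the study reports.

    A crude regex tokenizer stands in for an Algol 68 lexer:
    identifier-like tokens approximate Halstead's operands, and all
    other tokens approximate his operators.
    """
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*|\d+|\S", source)
    operands = [t for t in tokens if IDENT.fullmatch(t)]
    operators = [t for t in tokens if not IDENT.fullmatch(t)]

    n = len(tokens)                                   # Halstead length N = N1 + N2
    eta = len(set(operands)) + len(set(operators))    # vocabulary eta = eta1 + eta2
    volume = n * math.log2(eta) if eta > 1 else 0.0   # V = N * log2(eta)

    unique_idents = set(operands)
    mean_ident_len = (sum(map(len, unique_idents)) / len(unique_idents)
                      if unique_idents else 0.0)
    return {"length": n,
            "volume": volume,
            "mean_identifier_length": mean_ident_len}


def r_squared(judged, estimated):
    """Proportion of variance in readability judgments estimated by a model."""
    mean = sum(judged) / len(judged)
    ss_res = sum((y - f) ** 2 for y, f in zip(judged, estimated))
    ss_tot = sum((y - mean) ** 2 for y in judged)
    return 1.0 - ss_res / ss_tot


if __name__ == "__main__":
    snippet = "begin int total := 0; for i to 10 do total +:= i od end"
    print(analyzer_predictors(snippet))
```

Under these assumptions, fitting judged readability to such predictors by least squares and comparing `r_squared` before and after adding a human rating, such as a structuredness score for control flow, mirrors the 41 to 72 percent comparison reported above.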