Helping students understand the quality of their programs is a difficult task, hampered by the limited time instructors have for grading. When the number of programs to grade runs into the hundreds, instructors may be able to manage dynamic analysis of the programs and perhaps a cursory glance at the code itself. Automated solutions may appear attractive, but few exist in the literature. Further, too few examples exist to help instructors choose which metrics would help students visualize how they program. In this study, a collection of static metrics data obtained with Verilog Logiscope is correlated with an estimate of program quality to determine which metrics would convey to students at least the instructor's notion of quality. The results are encouraging: definite correlations exist, suggesting that static analysis is a viable methodology for assessing student work. Further work is proposed to confirm the study's results and their practical application.
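As a rough illustration of the correlation methodology the abstract describes, the sketch below computes Pearson correlations between a few classic static metrics and instructor quality scores. The metric names, sample values, and the use of Python's statistics.correlation are illustrative assumptions on my part; the study itself drew its metrics from Verilog Logiscope, and its actual metric set and scoring are not reproduced here.

```python
# Hypothetical sketch: correlate per-program static metrics with an
# instructor's quality estimate. All data below is invented for illustration.
from statistics import correlation  # Pearson's r; requires Python 3.10+

# One row per student program: some common static metrics plus the
# instructor's quality score (higher is better).
programs = [
    {"loc": 120, "cyclomatic": 14, "comment_ratio": 0.22, "quality": 8.5},
    {"loc": 340, "cyclomatic": 41, "comment_ratio": 0.05, "quality": 4.0},
    {"loc": 180, "cyclomatic": 19, "comment_ratio": 0.18, "quality": 7.0},
    {"loc": 260, "cyclomatic": 33, "comment_ratio": 0.09, "quality": 5.5},
    {"loc": 150, "cyclomatic": 12, "comment_ratio": 0.25, "quality": 9.0},
]

quality = [p["quality"] for p in programs]
for metric in ("loc", "cyclomatic", "comment_ratio"):
    values = [p[metric] for p in programs]
    r = correlation(values, quality)  # strength of linear association
    print(f"{metric:>14}: r = {r:+.2f}")
```

Under this kind of analysis, a metric with a strong correlation to the instructor's scores, whether positive or negative, is a candidate for showing students how their code measures up against the instructor's idea of quality.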