An empirical investigation into the design of auditory cues to enhance computer program comprehension

  • Authors:
  • Andreas Stefik; Christopher Hundhausen; Robert Patterson

  • Affiliations:
  • Southern Illinois University Edwardsville, Department of Computer Science, USA; Washington State University, Human-centered Environments for Learning and Programming (HELP) Lab, School of Electrical Engineering and Computer Science, USA; US Air Force Research Laboratory, USA

  • Venue:
  • International Journal of Human-Computer Studies
  • Year:
  • 2011

Abstract

Decades of research have led to notable improvements in the representations used to aid human comprehension of computer programs. Much of this research has focused on visual representations, which leaves open the question of how best to design auditory representations of computer programs. While this question has particular relevance for visually impaired programmers, sighted programmers might also benefit from enhanced auditory representations of their programs. To investigate this question empirically, we first introduce artifact encoding, a novel approach to rigorously measuring the comprehensibility of auditory representations of computer programs. Using this approach as a foundation, we present an experimental study that compared the comprehensibility of two alternative auditory program representations: one with lexical scoping cues that convey the nesting level of program statements, and another without such scoping cues. The results of our first experiment validate both artifact encoding and the scoping cues we used. To see whether auditory cues validated through our paradigm can aid program comprehension in a realistic task scenario, we experimentally compared programmers' ability to debug programs using three alternative environments: (1) an auditory execution environment with our empirically derived auditory cues; (2) an auditory execution environment with the current state-of-the-art auditory cues generated by a screen reader running on top of Microsoft Visual Studio; and (3) a visual version of the execution environment. The results of our second experiment showed that our comprehensible auditory cues were significantly better than the state of the art, yielding performance that approached that of the visual representation within the statistical margin of error. This research contributes a novel methodology and foundational empirical data that can guide the design of effective auditory representations of computer programs.
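
To make the idea of lexical scoping cues concrete, the sketch below shows one plausible way to attach a nesting-level cue to each program statement before handing it to a speech synthesizer. The indentation-based depth estimate, the cue wording ("nesting level N"), and the function names are illustrative assumptions, not the cue design evaluated in the paper.

```python
# Illustrative sketch only: annotate each statement with a depth-based
# auditory cue. The mapping and phrasing here are assumptions for
# demonstration; the paper's actual cues may differ.

def nesting_depth(line: str, indent_width: int = 4) -> int:
    """Estimate nesting level from leading whitespace."""
    stripped = line.lstrip(" ")
    return (len(line) - len(stripped)) // indent_width

def add_scoping_cues(source: str) -> list[str]:
    """Prepend a spoken scoping cue to each non-blank statement."""
    cued = []
    for line in source.splitlines():
        if not line.strip():
            continue
        depth = nesting_depth(line)
        cued.append(f"nesting level {depth}: {line.strip()}")
    return cued

if __name__ == "__main__":
    program = (
        "for i in range(3):\n"
        "    if i % 2 == 0:\n"
        "        print(i)\n"
    )
    for utterance in add_scoping_cues(program):
        # Each string would be passed to a screen reader or TTS engine.
        print(utterance)
```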