Combining self-reported and automatic data to improve programming effort measurement

  • Authors:
  • Lorin Hochstein; Victor R. Basili; Marvin V. Zelkowitz; Jeffrey K. Hollingsworth; Jeff Carver

  • Affiliations:
  • University of Maryland, College Park, MD; University of Maryland, College Park, MD and Fraunhofer Center, College Park, MD; University of Maryland, College Park, MD and Fraunhofer Center, College Park, MD; University of Maryland, College Park, MD; Mississippi State University, Mississippi State, MS

  • Venue:
  • Proceedings of the 10th European Software Engineering Conference held jointly with the 13th ACM SIGSOFT International Symposium on Foundations of Software Engineering
  • Year:
  • 2005


Abstract

Measuring effort accurately and consistently across subjects in a programming experiment can be surprisingly difficult. In particular, measures based on self-reported data may differ significantly from measures based on data recorded automatically from a subject's computing environment. Since self-reports can be unreliable, and not all activities can be captured automatically, a complete measure of programming effort should incorporate both classes of data. In this paper, we show how self-reported and automatically recorded effort data can be combined to validate measurements and to estimate total programming effort.
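To make the idea of combining the two data classes concrete, here is a minimal, hypothetical sketch (the class names, fields, and tolerance threshold are assumptions for illustration, not the paper's actual method): each session's self-report is cross-checked against the instrumented log for validation, and total effort is estimated from the logged on-computer time plus self-reported offline activities that instrumentation cannot capture.

```python
from dataclasses import dataclass

@dataclass
class Session:
    # Assumed structure for illustration only.
    self_reported_hours: float  # from the subject's effort diary
    logged_hours: float         # recorded automatically from the computing environment
    offline_hours: float        # self-reported work away from the computer (e.g. design on paper)

def validate(session: Session, tolerance: float = 0.25) -> bool:
    """Flag a session whose self-report disagrees with the automatically
    logged time by more than `tolerance` (as a fraction of logged time)."""
    if session.logged_hours == 0:
        return session.self_reported_hours == 0
    gap = abs(session.self_reported_hours - session.logged_hours)
    return gap <= tolerance * session.logged_hours

def total_effort(sessions: list[Session]) -> float:
    """Combine the two classes of data: trust the instrumented time for
    on-computer work, and add self-reported offline effort."""
    return sum(s.logged_hours + s.offline_hours for s in sessions)

sessions = [
    Session(self_reported_hours=2.0, logged_hours=1.9, offline_hours=0.5),
    Session(self_reported_hours=4.0, logged_hours=2.0, offline_hours=0.0),  # large discrepancy
]
flags = [validate(s) for s in sessions]  # second session fails validation
```

A real study would need a principled tolerance and a policy for reconciling flagged sessions; the sketch only shows the shape of the cross-validation and combination steps.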