On the difficulty of replicating human subjects studies in software engineering

  • Authors:
  • Jonathan Lung, Jorge Aranda, Steve M. Easterbrook, Gregory V. Wilson

  • Affiliation:
  • University of Toronto, Toronto, ON, Canada

  • Venue:
  • Proceedings of the 30th International Conference on Software Engineering (ICSE)
  • Year:
  • 2008

Abstract

Replications play an important role in verifying empirical results. In this paper, we discuss our experiences performing a literal replication of a human subjects experiment that examined the relationship between a simple test for consistent use of mental models and success in an introductory programming course. We encountered many difficulties in achieving comparability with the original experiment, due to a series of apparently minor differences in context. Based on this experience, we discuss the relative merits of replication, and suggest that, for some human subjects studies, literal replication may not be the most effective strategy for validating the results of previous studies.