Experiences surveying the crowd: reflections on methods, participation, and reliability

  • Authors:
  • Catherine C. Marshall; Frank M. Shipman

  • Affiliations:
  • Microsoft Research, Silicon Valley, Mountain View, CA; Texas A&M University, College Station, TX

  • Venue:
  • Proceedings of the 5th Annual ACM Web Science Conference
  • Year:
  • 2013

Abstract

Crowdsourcing services such as Amazon's Mechanical Turk (MTurk) provide new venues for recruiting participants and conducting studies; hundreds of surveys may be offered to workers at any given time. We reflect on the results of six related studies we performed on MTurk over a two-year period. The studies used a combination of open-ended questions and structured hypothetical statements about story-like scenarios to engage the efforts of 1,252 participants. We describe the method used in the studies and reflect on what we have learned relative to identified best practices. We analyze the aggregated data to profile the types of Turkers who take surveys and to examine how the characteristics of the surveys may influence data reliability. The results point to the value of participant engagement, identify potential changes in MTurk as a study venue, and highlight how communication among Turkers influences the data that researchers collect.