Reviewing versus doing: learning and performance in crowd assessment

  • Authors:
  • Haiyi Zhu; Steven P. Dow; Robert E. Kraut; Aniket Kittur

  • Affiliations:
  • Carnegie Mellon University, Pittsburgh, PA, USA (all authors)

  • Venue:
  • Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW '14)
  • Year:
  • 2014

Abstract

In modern crowdsourcing markets, requesters face the challenge of training and managing large, transient workforces. Requesters can hire peer workers to review others' work, but the value may be marginal, especially if the reviewers lack the requisite knowledge. Our research explores whether and how workers learn and improve their performance in a task domain by serving as peer reviewers. Further, we investigate whether peer reviewing is more effective in teams where the reviewers can reach consensus through discussion. An online between-subjects experiment compares the trade-offs of reviewing versus producing work under three different organization strategies: working individually, working as an interactive team, and aggregating individuals into nominal groups. The results show that workers who review others' work perform better on subsequent tasks than workers who only produce. We also find that interactive reviewer teams outperform individual reviewers on all quality measures. However, aggregating individual reviewers into nominal groups produces better quality assessments than interactive teams, except in task domains where discussion helps overcome individual misconceptions.
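
For readers unfamiliar with the nominal-group contrast mentioned above, the sketch below is a hypothetical illustration (not taken from the paper or its materials) of one common way independent reviewers' ratings can be pooled statistically, as opposed to an interactive team reporting a single consensus score after discussion. The function name, rating scale, and sample values are assumptions made purely for illustration.

```python
# Hypothetical sketch of a nominal-group aggregation: independent reviewers
# never interact; their separate ratings are simply pooled afterwards.
from statistics import mean


def nominal_group_score(individual_scores: list[float]) -> float:
    """Pool independent ratings (here, by averaging) with no reviewer interaction."""
    return mean(individual_scores)


# Example: three reviewers independently rate the same piece of work on a 1-7 scale.
independent_ratings = [4.0, 6.0, 5.0]
print(nominal_group_score(independent_ratings))  # 5.0 -- pooled nominal-group assessment

# An interactive team would instead discuss the work and report one agreed-upon
# score; the experiment compares assessments produced under these two strategies.
```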