In modern crowdsourcing markets, requesters face the challenge of training and managing large, transient workforces. Requesters can hire peer workers to review others' work, but the value may be marginal, especially if the reviewers lack the requisite knowledge. Our research explores whether and how workers learn and improve their performance in a task domain by serving as peer reviewers. Further, we investigate whether peer reviewing is more effective in teams where reviewers can reach consensus through discussion. An online between-subjects experiment compares the trade-offs of reviewing versus producing work under three organizational strategies: working individually, working as an interactive team, and aggregating individuals into nominal groups. The results show that workers who review others' work perform better on subsequent tasks than workers who only produce. We also find that interactive reviewer teams outperform individual reviewers on all quality measures. However, aggregating individual reviewers into nominal groups produces better quality assessments than interactive teams, except in task domains where discussion helps overcome individual misconceptions.