Micro-task platforms provide massively parallel, on-demand labor. However, it can be difficult to reliably achieve high-quality work because online workers may behave irresponsibly, misunderstand the task, or lack the necessary skills. This paper asks whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results. We investigate this question through Shepherd, a feedback system for crowdsourced work. In a between-subjects study with three conditions, crowd workers wrote consumer reviews for six products they own. Participants in the None condition received no immediate feedback, mirroring most current crowdsourcing practice. Participants in the Self-assessment condition judged their own work, and participants in the External assessment condition received expert feedback. Self-assessment alone yielded better overall work than the None condition and helped workers improve over time. External assessment yielded these benefits as well, and participants who received it also revised their work more. We conclude by discussing interaction and infrastructure approaches for integrating real-time assessment into online work.
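To make the study design concrete, the sketch below shows one way a platform might route a worker's submission into the three feedback conditions. This is a minimal illustration under stated assumptions, not Shepherd's actual implementation: the names (`Submission`, `assign_condition`, `request_expert_review`) and the self-assessment prompt wording are hypothetical.

```python
import random
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a Shepherd-style feedback loop.
# All identifiers here are illustrative assumptions.

CONDITIONS = ("none", "self_assessment", "external_assessment")

@dataclass
class Submission:
    worker_id: str
    task_id: str
    text: str
    feedback: Optional[str] = None
    revisions: int = 0

def assign_condition(worker_id: str) -> str:
    """Between-subjects design: each worker sees exactly one condition."""
    rng = random.Random(worker_id)  # deterministic seed per worker
    return rng.choice(CONDITIONS)

def request_expert_review(sub: Submission) -> str:
    # Placeholder: a real deployment would route the submission to an
    # expert queue and return the expert's written feedback.
    return f"Expert feedback pending for task {sub.task_id}"

def handle_submission(sub: Submission, rubric: str) -> Submission:
    condition = assign_condition(sub.worker_id)
    if condition == "none":
        pass  # no immediate feedback, as in current practice
    elif condition == "self_assessment":
        sub.feedback = f"Rate your own review against this rubric: {rubric}"
    else:  # external assessment
        sub.feedback = request_expert_review(sub)
    return sub
```

Seeding the random generator with the worker ID keeps the between-subjects assignment stable, so a worker stays in the same condition across all six review tasks.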