The emergence of online labor markets makes it far easier to use individual human raters to evaluate materials for data collection and analysis in the social sciences. In this paper, we report the results of an experiment, conducted in an online labor market, that measured the effectiveness of a collection of social and financial incentive schemes for motivating workers to perform a qualitative content-analysis task. Overall, workers performed better than chance, but results varied considerably with task difficulty. We find that treatment conditions that asked workers to think prospectively about the responses of their peers, when combined with financial incentives, produced more accurate performance. Other treatments generally had weak effects on quality. Workers in India performed significantly worse than workers in the US, regardless of treatment group.