We study quality-control mechanisms for a crowdsourcing system in which workers perform object comparison tasks. We examine error-masking techniques (e.g., voting) and the detection of bad workers. For the latter, we consider both gold-standard questions and disagreement with the plurality answer. Experiments on Mechanical Turk yield insights into the role of task difficulty in quality control and into the effectiveness of the schemes.
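To make the two mechanisms concrete, here is a minimal Python sketch, not the paper's implementation: plurality voting masks individual errors, while two signals flag potentially bad workers, namely the error rate on gold-standard questions and the rate of disagreement with the plurality answer. The toy data, threshold, and function names are illustrative assumptions.

```python
from collections import Counter

# answers[task][worker] = that worker's answer ("A" or "B" for a
# pairwise object comparison). Toy data for illustration only.
answers = {
    "t1":    {"w1": "A", "w2": "A", "w3": "B"},
    "t2":    {"w1": "B", "w2": "B", "w3": "B"},
    "gold1": {"w1": "A", "w2": "B", "w3": "A"},
}
gold = {"gold1": "A"}  # tasks whose correct answer is known in advance

# Error masking: take the plurality answer for each task.
plurality = {
    task: Counter(votes.values()).most_common(1)[0][0]
    for task, votes in answers.items()
}

def gold_error_rate(worker):
    """Fraction of gold-standard questions the worker answered wrong."""
    graded = [t for t in gold if worker in answers.get(t, {})]
    if not graded:
        return 0.0
    wrong = sum(1 for t in graded if answers[t][worker] != gold[t])
    return wrong / len(graded)

def disagreement_rate(worker):
    """Fraction of the worker's tasks where they differ from the plurality."""
    done = [t for t, votes in answers.items() if worker in votes]
    off = sum(1 for t in done if answers[t][worker] != plurality[t])
    return off / len(done) if done else 0.0

THRESHOLD = 0.5  # illustrative cutoff, not a value from the paper
workers = {w for votes in answers.values() for w in votes}
for w in sorted(workers):
    bad = gold_error_rate(w) > THRESHOLD or disagreement_rate(w) > THRESHOLD
    print(w, "flagged" if bad else "ok")
```

On this toy data, w2 is flagged because it misses the gold question, while w3's single disagreement with the plurality stays below the cutoff; in practice the threshold would be tuned against task difficulty, since honest workers disagree more often on hard comparisons.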