Eliciting Informative Feedback: The Peer-Prediction Method. Management Science.
Crowdsourcing user studies with Mechanical Turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Combining human and machine intelligence in large-scale crowdsourcing. Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1.
Characterizing and aggregating agent estimates. Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems.
Aggregating crowdsourced binary ratings. Proceedings of the 22nd International Conference on World Wide Web.
A central challenge in the programmatic access of human talent via crowdsourcing platforms is specifying incentives and checking the quality of contributions. Methodologies for checking quality include paying only when the task owner approves the work and hiring additional workers to evaluate contributors' work. Both approaches burden the people and organizations commissioning tasks, and both may be susceptible to manipulation by workers and task owners. Moreover, neither a task owner nor the task market may know the task well enough to evaluate worker reports. Methodologies for incentivizing workers without external quality checking include rewards based on agreement with a peer worker or with the final output of the system; these approaches are vulnerable to strategic manipulation by workers. Recent experiments on Mechanical Turk have demonstrated the negative influence of manipulations by workers and task owners on crowdsourcing systems [3]. We address this central challenge by introducing incentive mechanisms that promote truthful reporting in crowdsourcing and discourage manipulation by workers and task owners, without introducing additional overhead.
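To make the vulnerability of peer-agreement rewards concrete, the following is a minimal sketch (not the paper's proposed mechanism) of a simple output-agreement payment rule, in which a worker is paid when their report matches a randomly paired peer's report. The function name and labels are illustrative assumptions; the point is that workers who coordinate on a fixed, uninformative report earn exactly as much as truthful workers, so agreement alone does not incentivize truthfulness.

```python
def output_agreement_reward(report: str, peer_report: str, reward: float = 1.0) -> float:
    """Illustrative output-agreement rule: pay a worker iff their report
    matches a randomly paired peer's report (labels are hypothetical)."""
    return reward if report == peer_report else 0.0

# Truthful workers who observe the same signal agree and are paid.
assert output_agreement_reward("cat", "cat") == 1.0
assert output_agreement_reward("cat", "dog") == 0.0

# But workers who collude on a fixed report, ignoring the task entirely,
# earn the same reward as truthful workers -- the manipulation the
# abstract warns about.
colluders = ["cat", "cat", "cat"]
payoffs = [output_agreement_reward(r, p)
           for r, p in zip(colluders, colluders[1:] + colluders[:1])]
assert payoffs == [1.0, 1.0, 1.0]
```

The sketch shows only why external quality checking or a more carefully designed scoring rule is needed; mechanisms that make truthful reporting an equilibrium must reward informativeness, not mere agreement.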