Crowd IQ: aggregating opinions to boost performance
Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Proceedings of the 22nd international conference on World Wide Web
We measure crowdsourcing performance using a standard IQ questionnaire and examine how performance on Amazon Mechanical Turk (AMT) varies under different conditions: the payment amount offered, whether incorrect responses affect workers' reputations, the threshold reputation score required of participating AMT workers, and the number of workers assigned per task. We show that crowds composed of high-reputation workers achieve higher performance than low-reputation crowds, and that the effect of the payment amount is non-monotone: paying either too much or too little degrades performance. Furthermore, performance is higher when the task is designed so that incorrect responses can decrease workers' reputation scores. Aggregating multiple responses to the same task by majority vote can significantly improve performance, which can be boosted further by dynamically allocating workers to tasks in order to break ties.
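The aggregation scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: `get_extra_response` is a hypothetical callback standing in for allocating one more AMT worker to the tied task, and `max_extra` is an assumed cap on extra allocations.

```python
from collections import Counter

def majority_vote(responses):
    """Return the most common response, or None if the top answers tie."""
    counts = Counter(responses).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: no clear majority yet
    return counts[0][0]

def aggregate_with_tie_breaking(responses, get_extra_response, max_extra=3):
    """Aggregate responses by majority vote; on a tie, dynamically
    request one additional worker at a time until the tie breaks
    (or the assumed budget max_extra is exhausted)."""
    responses = list(responses)
    answer = majority_vote(responses)
    extras = 0
    while answer is None and extras < max_extra:
        responses.append(get_extra_response())  # allocate one more worker
        extras += 1
        answer = majority_vote(responses)
    if answer is None:
        # budget exhausted and still tied: fall back to a modal answer
        answer = Counter(responses).most_common(1)[0][0]
    return answer
```

For example, with initial responses `["A", "B"]` the vote is tied, so one extra worker is queried; if that worker answers `"B"`, the aggregate becomes `"B"`.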