Crowdsourcing is a new Web phenomenon in which a firm takes a function once performed in-house and outsources it to a crowd, usually in the form of an open contest. Designing efficient crowdsourcing mechanisms is not possible without a deep understanding of the incentives and strategic choices of all participants. This paper presents an empirical analysis of the determinants of individual performance in multiple simultaneous crowdsourcing contests, using a unique dataset from the world's largest competitive software development portal, TopCoder.com. Special attention is given to the effects of the reputation system currently used by TopCoder.com on the behavior of contestants. We find that individual-specific traits, together with the project payment and the number of project requirements, are significant predictors of final project quality. Furthermore, we find significant evidence of strategic behavior by contestants. Highly rated contestants face tougher competition from their opponents in the competition phase of the contest. To soften this competition, they move first in the registration phase, signing up early for particular projects. Although registration in TopCoder contests is non-binding, it deters opponents from entering the same contest; our lower-bound estimate shows that this strategy generates a significant surplus gain for highly rated contestants. We conjecture that the reputation-plus-cheap-talk mechanism employed by TopCoder has a positive effect on the allocative efficiency of simultaneous all-pay contests and should be considered for adoption by other crowdsourcing platforms.
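The entry-deterrence logic above can be illustrated with a deliberately simplified toy model (this is not the paper's estimation strategy): if each of n symmetric entrants in a contest wins the prize P with probability 1/n while sinking an effort cost c, then early registration that scares off even one rival raises a contestant's expected payoff. The prize, cost, and entrant counts below are hypothetical numbers chosen for illustration only.

```python
# Toy sketch (illustrative assumptions, not the paper's model):
# each of n entrants wins prize P with probability 1/n and pays effort cost c.
# Early, non-binding registration by a highly rated contestant is assumed
# to deter exactly one rival from entering the same contest.

def expected_payoff(prize: float, cost: float, n_entrants: int) -> float:
    """Expected payoff of one contestant among n_entrants symmetric rivals."""
    return prize / n_entrants - cost

P, c = 300.0, 40.0    # hypothetical prize and effort cost
n_base = 5            # entrants when no deterrence occurs
n_deterred = 4        # entrants after one rival is deterred by early signup

gain = expected_payoff(P, c, n_deterred) - expected_payoff(P, c, n_base)
print(f"surplus gain from deterring one entrant: {gain:.2f}")  # -> 15.00
```

In this stylized setup the surplus gain is simply P/(n-1) - P/n, which is always positive, consistent with the paper's finding that early registration yields a measurable surplus gain for highly rated contestants.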