Questions in, knowledge in?: a study of Naver's question answering community
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Designing incentives for online question and answer forums
Proceedings of the 10th ACM conference on Electronic commerce
Virtual gifts and guanxi: supporting social exchange in a Chinese online community
Proceedings of the ACM 2011 conference on Computer supported cooperative work
Optimal crowdsourcing contests
Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms
Implementing optimal outcomes in social computing: a game-theoretic approach
Proceedings of the 21st international conference on World Wide Web
Crowdsourcing with endogenous entry
Proceedings of the 21st international conference on World Wide Web
Learning and incentives in user-generated content: multi-armed bandits with endogenous arms
Proceedings of the 4th conference on Innovations in Theoretical Computer Science
Incentives, gamification, and game theory: an economic approach to badge design
Proceedings of the fourteenth ACM conference on Electronic commerce
Incentivizing participation in online forums for education
Proceedings of the fourteenth ACM conference on Electronic commerce
Social computing and user-generated content: a game-theoretic approach
ACM SIGecom Exchanges
Improving wiki article quality through crowd coordination: a resource allocation approach
International Journal on Semantic Web & Information Systems
In many social computing applications, such as online Q&A forums, the best contribution to each task receives a high reward, while all remaining contributions receive an identical, lower reward irrespective of their actual qualities. Suppose a mechanism designer (site owner) wishes to optimize an objective that is some function of the number and qualities of the received contributions. When potential contributors are {\em strategic} agents, who decide whether or not to contribute so as to selfishly maximize their own utilities, is such a "best contribution" mechanism, $M_b$, adequate to implement an outcome that is optimal for the mechanism designer?

We first show that in settings where a contribution's value is determined primarily by an agent's expertise, and agents strategically choose only whether to contribute, contests can implement optimal outcomes: for any reasonable objective, the rewards for the best and remaining contributions in $M_b$ can always be chosen so that the outcome in the unique symmetric equilibrium of $M_b$ maximizes the mechanism designer's utility. We also show how the mechanism designer can learn these optimal rewards when she does not know the parameters of the agents' utilities, as may be the case in practice.

We next consider settings where a contribution's value depends on both the contributor's expertise and her effort, and agents endogenously choose how much effort to exert in addition to deciding whether to contribute. Here, we show that optimal outcomes can never be implemented by contests if the system can rank the qualities of contributions perfectly. However, if there is noise in the rankings of contributions, the mechanism designer can again induce agents to follow strategies that maximize her utility. Thus, imperfect rankings can actually help achieve implementability of optimal outcomes when effort is endogenous and influences quality.
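To make the equilibrium structure of such a "best contribution" mechanism concrete, here is an illustrative toy model (a sketch under simplifying assumptions, not the paper's construction): $n$ agents draw i.i.d. Uniform(0,1) expertise, entry costs $c$, the best entry earns reward $A$ and every other entry earns $B$. Under a symmetric threshold strategy, an entrant with expertise $q$ wins with probability $q^{n-1}$, so the marginal type $\tau$ is pinned down by the indifference condition $A\tau^{n-1} + B(1-\tau^{n-1}) = c$. All function names and parameter values below are hypothetical.

```python
import random


def equilibrium_threshold(A, B, c, n):
    """Symmetric-equilibrium expertise threshold in a toy best-contribution
    contest: n agents with i.i.d. Uniform(0,1) expertise, entry cost c,
    reward A for the best contribution and B for every other contribution.

    An entrant with expertise q wins with probability q**(n-1), so the
    marginal type tau solves A*tau**(n-1) + B*(1 - tau**(n-1)) = c.
    """
    if c <= B:   # even a guaranteed loser covers the cost: everyone enters
        return 0.0
    if c >= A:   # even a guaranteed winner cannot cover the cost: no one enters
        return 1.0
    return ((c - B) / (A - B)) ** (1.0 / (n - 1))


def simulate_participation(A, B, c, n, trials=20000, seed=0):
    """Monte Carlo check: fraction of expertise draws above the threshold,
    i.e. the equilibrium participation rate (should approach 1 - tau)."""
    rng = random.Random(seed)
    tau = equilibrium_threshold(A, B, c, n)
    entries = sum(1 for _ in range(trials) if rng.random() >= tau)
    return entries / trials


# With n = 2 the threshold reduces to (c - B) / (A - B).
tau = equilibrium_threshold(A=1.0, B=0.0, c=0.5, n=2)
print(tau)  # 0.5: only the top half of the expertise distribution enters
```

Raising $B$, the reward for non-best contributions, lowers the threshold and so raises participation; tuning the $(A, B)$ pair against an objective over the number and qualities of entries is exactly the lever the abstract's mechanism designer adjusts.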