Crowdsourcing is now widely used to replace judgement or evaluation by an expert authority with an aggregate evaluation from a number of non-experts, in applications ranging from rating and categorizing online content to peer grading of student assignments in massive open online courses (MOOCs). A key issue in these settings, where direct monitoring of both effort and accuracy is infeasible, is incentivizing agents in the 'crowd' both to put in the effort needed to make good evaluations and to report those evaluations truthfully. We study the design of mechanisms for crowdsourced judgement elicitation when workers strategically choose both their reports and the effort they put into their evaluations. This leads to a new family of information elicitation problems with unobservable ground truth, where an agent's proficiency, the probability with which she correctly evaluates the underlying ground truth, is endogenously determined by her strategic choice of how much effort to put into the task.

Our main contribution is a simple new mechanism for binary information elicitation across multiple tasks when agents have endogenous proficiencies, with the following properties: (i) exerting maximum effort followed by truthful reporting of observations is a Nash equilibrium; (ii) this equilibrium yields the maximum payoff to all agents, even when agents have different maximum proficiencies, can use mixed strategies, and can choose a different strategy for each of their tasks. Our mechanism requires only minimal bounds on the priors, asks agents to report only their own evaluations, and does not rely on the number of agent reports per task diverging to achieve its incentive properties.
The main idea behind our mechanism is to use the presence of multiple tasks and ratings to estimate a reporting statistic that identifies and penalizes low-effort agreement: the mechanism rewards an agent for agreeing with another 'reference' report on the same task, but subtracts out this statistic to penalize blind agreement, so that agents obtain rewards only when they put effort into their observations.
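To make the reward structure concrete, here is a minimal sketch of one way such a penalized-agreement score could be computed. The function name and the exact form of the penalty statistic are illustrative assumptions, not the paper's precise payment rule: the penalty here is the chance that the two agents would agree by luck, estimated from the empirical frequencies of their reports on their *other* tasks.

```python
def penalized_agreement_score(task, agent_reports, peer_reports):
    """Hypothetical sketch of a penalized-agreement reward for binary reports.

    `agent_reports` / `peer_reports` map task ids to reports in {0, 1}.
    Reward = agreement on the shared task, minus a statistic estimating how
    often these two agents would agree by blind chance, built from their
    reports on tasks other than the shared one.
    """
    agree = 1.0 if agent_reports[task] == peer_reports[task] else 0.0

    # Empirical frequency of a '1' report on each agent's other tasks.
    a_other = [r for t, r in agent_reports.items() if t != task]
    p_other = [r for t, r in peer_reports.items() if t != task]
    fa = sum(a_other) / len(a_other)
    fp = sum(p_other) / len(p_other)

    # Probability that independent draws from these empirical report
    # distributions agree by chance: the "blind agreement" penalty term.
    chance_agreement = fa * fp + (1.0 - fa) * (1.0 - fp)
    return agree - chance_agreement
```

Under this sketch, two agents who always report '1' regardless of the task agree on everything, but their chance-agreement statistic is also 1, so their net score is zero, while agents whose agreement tracks the task earn a positive score in expectation.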