Human Computation
Efficient crowdsourcing contests
Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
We propose a quality control mechanism that utilizes workers' self-reported confidence in crowdsourced labeling tasks. Generally, a worker has some confidence in the correctness of her answers, and asking about it is useful for estimating the probability of correctness. However, two main obstacles must be overcome before confidence can be used to infer correct answers. First, a worker is not always well-calibrated: since she is sometimes over- or underconfident, her stated level of confidence does not always accurately reflect the probability of correctness. Second, she does not always truthfully report her actual confidence. We therefore design an indirect mechanism that lets a worker declare her confidence by choosing a reward plan from a menu of plans, each corresponding to a different confidence interval. Our mechanism ensures that choosing the plan matching the worker's true confidence maximizes her expected utility.
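To illustrate the idea of a truth-inducing menu of reward plans (this is a minimal sketch, not the paper's exact mechanism), one can build the plans from the quadratic (Brier) scoring rule: plan i pays the Brier scores evaluated at the midpoint of the i-th confidence interval, so a worker's expected utility is maximized by selecting the plan whose interval contains her true confidence. The function names and the choice of four equal-width intervals are illustrative assumptions.

```python
def make_plans(num_plans=4):
    """Return a menu of (reward_if_correct, reward_if_wrong) pairs.

    Plan i is built from the Brier score at the midpoint of the i-th
    equal-width confidence interval (an illustrative construction, not
    the paper's exact mechanism).
    """
    plans = []
    for i in range(num_plans):
        c = (i + 0.5) / num_plans        # midpoint of interval i
        r_correct = 1 - (1 - c) ** 2     # Brier score if the answer is correct
        r_wrong = 1 - c ** 2             # Brier score if the answer is wrong
        plans.append((r_correct, r_wrong))
    return plans

def expected_utility(plan, p):
    """Expected reward for a worker whose answer is correct with probability p."""
    r_correct, r_wrong = plan
    return p * r_correct + (1 - p) * r_wrong

def best_plan(plans, p):
    """Index of the plan that maximizes expected utility at true confidence p."""
    return max(range(len(plans)), key=lambda i: expected_utility(plans[i], p))
```

Because the Brier score is a proper scoring rule, the expected-utility-maximizing plan for any interior confidence p is the one whose interval contains p; e.g., with four plans, a worker with p = 0.6 does best choosing plan 2, which covers [0.5, 0.75).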