The face of quality in crowdsourcing relevance labels: demographics, personality and labeling accuracy

  • Authors:
  • Gabriella Kazai (Microsoft Research, Cambridge, United Kingdom); Jaap Kamps (University of Amsterdam, Amsterdam, Netherlands); Natasa Milic-Frayling (Microsoft Research, Cambridge, United Kingdom)

  • Venue:
  • Proceedings of the 21st ACM international conference on Information and knowledge management
  • Year:
  • 2012

Abstract

Information retrieval systems require human-contributed relevance labels for their training and evaluation. Increasingly, such labels are collected under the anonymous, uncontrolled conditions of crowdsourcing, leading to varied output quality. While a range of quality assurance and control techniques has been developed to reduce noise during or after task completion, little is known about the workers themselves and possible relationships between workers' characteristics and the quality of their work. In this paper, we ask what relatively well- or poorly-performing crowds, working under specific task conditions, actually look like in terms of worker characteristics such as demographics or personality traits. Our findings show that the face of a crowd is in fact indicative of the quality of its work.