BATC: a benchmark for aggregation techniques in crowdsourcing

  • Authors:
  • Quoc Viet Hung Nguyen;Thanh Tam Nguyen;Ngoc Tran Lam;Karl Aberer

  • Affiliations:
École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland (all authors)

  • Venue:
  • Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
  • Year:
  • 2013

Abstract

As the volume of AI problems involving human knowledge is likely to soar, crowdsourcing has become essential in a wide range of World Wide Web applications. One of the biggest challenges of crowdsourcing is aggregating the answers collected from crowd workers, and many aggregation techniques have therefore been proposed. However, given a new application, it is difficult for users to choose the best-suited technique and appropriate parameter values, since each technique has distinct performance characteristics that depend on various factors (e.g., worker expertise, question difficulty). In this paper, we develop a benchmarking tool that allows users to (i) simulate the crowd and (ii) evaluate aggregation techniques along different dimensions (accuracy, sensitivity to spammers, etc.). We believe that this tool can serve as a practical guideline for both researchers and software developers: researchers can use it to assess existing or new techniques, while developers can reuse its components to reduce development complexity.
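
To make the two components concrete, the following is a minimal, self-contained sketch (not the authors' BATC tool) of what crowd simulation and answer aggregation could look like: workers with assumed per-worker accuracies (including near-random "spammers") label binary questions, and a simple majority-vote baseline is evaluated against the simulated ground truth. All function names and parameter values here are illustrative assumptions.

```python
import random
from collections import Counter

def simulate_crowd(num_questions, worker_accuracy):
    """Simulate binary labeling: each worker answers each question
    correctly with probability equal to that worker's accuracy.
    (Hypothetical helper, not part of BATC.)"""
    truth = [random.choice([0, 1]) for _ in range(num_questions)]
    answers = {}  # (worker_index, question_index) -> label
    for w, acc in enumerate(worker_accuracy):
        for q in range(num_questions):
            correct = random.random() < acc
            answers[(w, q)] = truth[q] if correct else 1 - truth[q]
    return truth, answers

def majority_vote(answers, num_questions, num_workers):
    """Aggregate each question's labels by simple majority voting."""
    aggregated = []
    for q in range(num_questions):
        votes = Counter(answers[(w, q)] for w in range(num_workers))
        aggregated.append(votes.most_common(1)[0][0])
    return aggregated

def accuracy(truth, aggregated):
    """Fraction of questions whose aggregated label matches the truth."""
    return sum(t == a for t, a in zip(truth, aggregated)) / len(truth)

if __name__ == "__main__":
    random.seed(42)
    # Assumed mix of reliable workers and near-random spammers (~0.5 accuracy).
    worker_acc = [0.9, 0.85, 0.8, 0.5, 0.5]
    truth, answers = simulate_crowd(num_questions=200, worker_accuracy=worker_acc)
    agg = majority_vote(answers, num_questions=200, num_workers=len(worker_acc))
    print(f"Majority-vote accuracy: {accuracy(truth, agg):.3f}")
```

Varying the assumed worker accuracies or the spammer ratio in such a setup is one way to probe an aggregation technique's sensitivity to spammers, which is among the evaluation dimensions the abstract mentions.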