Crowdsourcing for relevance evaluation

  • Authors:
  • Omar Alonso; Daniel E. Rose; Benjamin Stewart

  • Affiliations:
  • A9.com, Palo Alto, CA (all authors)

  • Venue:
  • ACM SIGIR Forum
  • Year:
  • 2008

Abstract

Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. We describe a new approach to evaluation called TERC, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each performs a small evaluation task.
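
To make the crowdsourcing paradigm the abstract refers to concrete, the sketch below shows one common way such small per-worker tasks can be combined: each worker supplies a relevance label for a query-document pair, and the labels are aggregated by majority vote. This is only an illustrative example, not the paper's TERC protocol; all names (RelevanceJudgment, aggregate_by_majority) and the sample labels are hypothetical.

# Illustrative sketch only: not the TERC methodology described in the paper.
# Many workers each contribute a small relevance judgment; judgments for the
# same (query, document) pair are collapsed into one label by majority vote.
from collections import Counter
from dataclasses import dataclass

@dataclass
class RelevanceJudgment:
    worker_id: str
    query: str
    doc_id: str
    label: str  # e.g. "relevant" or "not_relevant"

def aggregate_by_majority(judgments):
    """Return one majority label per (query, doc_id) pair."""
    by_pair = {}
    for j in judgments:
        by_pair.setdefault((j.query, j.doc_id), []).append(j.label)
    return {
        pair: Counter(labels).most_common(1)[0][0]
        for pair, labels in by_pair.items()
    }

if __name__ == "__main__":
    judgments = [
        RelevanceJudgment("w1", "digital camera", "d42", "relevant"),
        RelevanceJudgment("w2", "digital camera", "d42", "relevant"),
        RelevanceJudgment("w3", "digital camera", "d42", "not_relevant"),
    ]
    print(aggregate_by_majority(judgments))
    # {('digital camera', 'd42'): 'relevant'}

In practice, crowdsourced evaluation pipelines often go beyond simple majority voting (for example, weighting workers by agreement with known gold answers), but the example captures the basic idea of distributing many small judgment tasks across a large community of users.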