Evaluation of commonsense knowledge with Mechanical Turk

  • Authors:
  • Jonathan Gordon (University of Rochester, Rochester, NY)
  • Benjamin Van Durme (Johns Hopkins University, Baltimore, MD)
  • Lenhart K. Schubert (University of Rochester, Rochester, NY)

  • Venue:
  • CSLDAMT '10 Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk
  • Year:
  • 2010

Abstract

Efforts to automatically acquire world knowledge from text suffer from the lack of an easy means of evaluating the resulting knowledge. We describe initial experiments using Mechanical Turk to crowdsource evaluation to non-experts at little cost, resulting in a collection of factoids with associated quality judgements. We describe our method for acquiring usable judgements from the public and the impact of such large-scale evaluation on the task of knowledge acquisition.