Crowdsourcing inference-rule evaluation

  • Authors:
  • Naomi Zeichner; Jonathan Berant; Ido Dagan

  • Affiliations:
  • Bar-Ilan University, Ramat-Gan, Israel; Tel-Aviv University, Tel-Aviv, Israel; Bar-Ilan University, Ramat-Gan, Israel

  • Venue:
  • ACL '12 Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2
  • Year:
  • 2012


Abstract

The importance of inference rules to semantic applications has long been recognized, and extensive work has been carried out to automatically acquire inference-rule resources. However, evaluating such resources has turned out to be a non-trivial task, slowing progress in the field. In this paper, we suggest a framework for evaluating inference-rule resources. Our framework simplifies a previously proposed "instance-based evaluation" method that involved substantial annotator training, making it suitable for crowdsourcing. We show that our method produces a large number of annotations with high inter-annotator agreement, at low cost and in a short period of time, without requiring the training of expert annotators.
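The abstract reports high inter-annotator agreement but does not name the measure here. As a minimal illustrative sketch, agreement between two annotators on binary rule-application judgments is commonly quantified with Cohen's kappa; the annotator labels below are hypothetical, and this is not claimed to be the paper's exact procedure.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's marginal distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(labels_a) | set(labels_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical crowd judgments: 1 = "rule application holds", 0 = "does not".
annotator_1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
annotator_2 = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
print(f"kappa = {cohens_kappa(annotator_1, annotator_2):.2f}")
```

Kappa corrects raw agreement for what two annotators would agree on by chance, which is why it is a standard choice for reporting annotation quality in this setting.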