Parser evaluation using textual entailments

  • Authors:
  • Deniz Yuret (Koç University, Istanbul, Turkey 34450); Laura Rimell (Computer Laboratory, Cambridge, UK CB3 0FD); Aydın Han (Koç University, Istanbul, Turkey 34450)

  • Venue:
  • Language Resources and Evaluation
  • Year:
  • 2013


Abstract

Parser Evaluation using Textual Entailments (PETE) is a shared task in the SemEval-2010 Evaluation Exercises on Semantic Evaluation. The task involves recognizing textual entailments based on syntactic information alone. PETE introduces a new parser evaluation scheme that is formalism independent, less prone to annotation error, and focused on semantically relevant distinctions. This paper describes the PETE task, gives an error analysis of the top-performing Cambridge system, and introduces a standard entailment module that can be used with any parser that outputs Stanford typed dependencies.
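The abstract's core idea of deciding an entailment from syntax alone can be sketched informally: given Stanford typed dependencies for the text and for a short hypothesis sentence, check whether the hypothesis's content-bearing dependencies are covered by the text's parse. The following Python sketch is illustrative only; the relation set, the containment heuristic, and all function names are assumptions for exposition, not the authors' actual entailment module.

```python
# Illustrative sketch: a simple containment heuristic over Stanford typed
# dependencies. NOT the PETE entailment module described in the paper.
from typing import Set, Tuple

# A dependency is represented as (relation, governor_lemma, dependent_lemma).
Dependency = Tuple[str, str, str]

# Relations assumed (for this sketch) to carry the semantically relevant
# structure; this choice is a guess, not taken from the paper.
CONTENT_RELATIONS = {"nsubj", "nsubjpass", "dobj", "iobj"}

def content_deps(deps: Set[Dependency]) -> Set[Dependency]:
    """Keep only dependencies whose relation we treat as content-bearing."""
    return {d for d in deps if d[0] in CONTENT_RELATIONS}

def entails(text_deps: Set[Dependency], hyp_deps: Set[Dependency]) -> bool:
    """Answer YES iff every content dependency of the hypothesis also
    appears among the text's dependencies (a crude containment check)."""
    return content_deps(hyp_deps) <= content_deps(text_deps)

# Example: "The man who was sitting ate the apple." vs. "The man ate the apple."
text = {("nsubj", "ate", "man"), ("dobj", "ate", "apple"),
        ("acl:relcl", "man", "sitting")}
hypothesis = {("nsubj", "ate", "man"), ("dobj", "ate", "apple")}
print(entails(text, hypothesis))  # True
```

In this sketch the entailment decision reduces to a set-containment test over selected dependency triples, which is one way a formalism-independent, dependency-based evaluation could be operationalized; the paper's actual module may differ.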