Evaluating Textual Entailment Recognition for University Entrance Examinations

  • Authors:
  • Yusuke Miyao; Hideki Shima; Hiroshi Kanayama; Teruko Mitamura

  • Affiliations:
  • National Institute of Informatics; Carnegie Mellon University; IBM Research - Tokyo; Carnegie Mellon University

  • Venue:
  • ACM Transactions on Asian Language Information Processing (TALIP) - Special Issue on RITE
  • Year:
  • 2012


Abstract

This article describes an attempt to use questions from university entrance examinations to evaluate textual entailment recognition. Questions in fields such as history and politics primarily test the examinee’s knowledge by asking for the true statement among multiple choices. Answering such questions can be regarded as equivalent to finding evidential texts in a textbase such as textbooks or Wikipedia; the task can therefore be recast as recognizing textual entailment between a description in the textbase and a statement given in a question. We focused on the National Center Test for University Admission in Japan and, using Wikipedia as the textbase, converted its questions into evaluation data for textual entailment recognition. We found that nearly half of the questions could be mapped to textual entailment recognition: 941 text pairs were created from 404 questions across six subjects. The data set was provided for a subtask of NTCIR RITE (Recognizing Inference in Text), and 16 systems from six teams used it for evaluation. The best system achieved a correct answer ratio of 56%, significantly better than a random-choice baseline.
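The recasting described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual conversion pipeline: the question content, the "Y"/"N" labels, and the retrieval step are all assumptions made for the example.

```python
# Hypothetical sketch: converting one multiple-choice question into
# (text, hypothesis, label) triples for textual entailment recognition.
# The sample question, label scheme, and retriever are illustrative
# assumptions, not the dataset's actual format.

def question_to_pairs(choices, correct_index, retrieve):
    """Pair each answer choice with a retrieved textbase passage.

    choices       -- candidate statements from one question
    correct_index -- index of the true statement
    retrieve      -- function mapping a statement to a relevant passage
                     (e.g., a sentence found in Wikipedia)
    Returns (text, hypothesis, label) triples: "Y" (entailed) for the
    correct choice, "N" for the others.
    """
    pairs = []
    for i, hypothesis in enumerate(choices):
        text = retrieve(hypothesis)  # evidential passage from the textbase
        label = "Y" if i == correct_index else "N"
        pairs.append((text, hypothesis, label))
    return pairs

# Toy usage with a stand-in retriever (a real system would search Wikipedia).
choices = [
    "The Meiji Restoration took place in 1868.",
    "The Meiji Restoration took place in 1912.",
]
passage = "The Meiji Restoration, beginning in 1868, ended the Tokugawa shogunate."
pairs = question_to_pairs(choices, 0, lambda h: passage)
print(pairs[0][2], pairs[1][2])  # → Y N
```

A system evaluated on such pairs then only has to decide, for each pair, whether the passage entails the hypothesis, which is how choosing the true statement reduces to entailment recognition.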