Overview of the INEX 2010 question answering track (QA@INEX)

  • Authors:
  • Eric SanJuan (LIA, Université d'Avignon et des Pays de Vaucluse, France)
  • Patrice Bellot (LIA, Université d'Avignon et des Pays de Vaucluse, France)
  • Véronique Moriceau (LIMSI, CNRS, University Paris-Sud 11, France)
  • Xavier Tannier (LIMSI, CNRS, University Paris-Sud 11, France)

  • Venue:
  • INEX'10: Proceedings of the 9th International Conference on Initiative for the Evaluation of XML Retrieval: Comparative Evaluation of Focused Retrieval
  • Year:
  • 2010

Abstract

The INEX Question Answering track (QA@INEX) aims to evaluate a complex question-answering task over Wikipedia. The question set comprises precise factoid questions that expect short answers, as well as more complex questions that can be answered by several sentences or by an aggregation of texts from different documents. Long answers were evaluated using the Kullback-Leibler (KL) divergence between n-gram distributions, which allowed summarization systems to participate. Most of them generated a readable extract of sentences drawn from the documents top-ranked by a state-of-the-art document retrieval engine. Participants also tested several methods of question disambiguation. Evaluation was carried out on a pool of real questions from OverBlog and Yahoo! Answers. Results tend to show that the baseline restricted focused IR system minimizes KL divergence but lacks readability, whereas summarization systems tend to use longer, standalone sentences, improving readability at the cost of a higher KL divergence.
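
The measure can be read as KL(P || Q) = sum over g of P(g) log(P(g)/Q(g)), where P is the n-gram distribution of a reference corpus and Q that of the submitted answer. Below is a minimal Python sketch of such a computation, assuming simple additive smoothing over the joint n-gram vocabulary; the function names and the smoothing constant epsilon are illustrative assumptions, not the track's actual evaluation toolkit.

    import math
    from collections import Counter

    def ngrams(tokens, n):
        """All contiguous n-grams of a token sequence, as tuples."""
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def kl_divergence(reference_tokens, answer_tokens, n=1, epsilon=1e-9):
        """KL(P_ref || Q_answer) over n-gram distributions.

        Additive (epsilon) smoothing over the joint vocabulary keeps
        unseen n-grams from producing infinite divergence.
        """
        p = Counter(ngrams(reference_tokens, n))
        q = Counter(ngrams(answer_tokens, n))
        vocab = set(p) | set(q)
        p_total = sum(p.values()) + epsilon * len(vocab)
        q_total = sum(q.values()) + epsilon * len(vocab)
        kl = 0.0
        for g in vocab:
            p_g = (p[g] + epsilon) / p_total
            q_g = (q[g] + epsilon) / q_total
            kl += p_g * math.log(p_g / q_g)
        return kl

    # Toy usage: compare an "answer" against a tiny "reference".
    reference = "the quick brown fox jumps over the lazy dog".split()
    answer = "the quick fox jumps over the dog".split()
    print(kl_divergence(reference, answer, n=1))

A lower divergence means the answer's n-gram profile stays closer to the reference, which is why the trade-off reported above arises: longer, self-contained sentences improve readability while drifting from that profile.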