Overview of the Answer Validation Exercise 2008

  • Authors:
  • Álvaro Rodrigo, Anselmo Peñas, Felisa Verdejo

  • Affiliations:
  • Dpto. Lenguajes y Sistemas Informáticos, UNED (all authors)

  • Venue:
  • CLEF'08: Proceedings of the 9th Cross-Language Evaluation Forum Conference on Evaluating Systems for Multilingual and Multimodal Information Access
  • Year:
  • 2008


Abstract

The Answer Validation Exercise (AVE) at the Cross-Language Evaluation Forum (CLEF) aims to develop systems able to decide whether the answer given by a Question Answering (QA) system is correct or not. We present here the exercise description, the changes in the evaluation with respect to the previous edition, and the results of this third edition (AVE 2008). Last year's changes allowed us to measure the possible gain in performance obtained by using AV systems as the answer selection method of QA systems. In this edition we additionally wanted to reward AV systems able to detect when all the candidate answers to a question are incorrect. Nine groups participated with 24 runs in 5 different languages and, compared with the QA systems, the results show evidence of the potential gain that more sophisticated AV modules might bring to QA.