Overview of ResPubliQA 2009: question answering evaluation over European legislation

  • Authors:
  • Anselmo Peñas (UNED, Spain); Pamela Forner (CELCT, Italy); Richard Sutcliffe (DLTG, University of Limerick, Ireland); Álvaro Rodrigo (UNED, Spain); Corina Forăscu (UAIC and RACAI, Romania); Iñaki Alegria (University of the Basque Country, Spain); Danilo Giampiccolo (CELCT, Italy); Nicolas Moreau (ELDA/ELRA, France); Petya Osenova (BTB, Bulgaria)

  • Venue:
  • CLEF'09: Proceedings of the 10th Cross-Language Evaluation Forum Conference on Multilingual Information Access Evaluation: Text Retrieval Experiments
  • Year:
  • 2009

Abstract

This paper describes the first round of ResPubliQA, a Question Answering (QA) evaluation task over European legislation, proposed at the Cross Language Evaluation Forum (CLEF) 2009. The exercise consists of extracting a relevant paragraph of text that completely satisfies the information need expressed by a natural language question. The general goals of this exercise are (i) to study whether current QA technologies tuned for newswire collections and Wikipedia can be adapted to a new domain (law, in this case); (ii) to move to a more realistic scenario, considering people from the legal domain as users and paragraphs as system output; (iii) to compare current QA technologies with pure Information Retrieval (IR) approaches; and (iv) to introduce into QA systems the Answer Validation technologies developed over the past three years. The paper describes the task in more detail, presenting the different types of questions, the methodology for creating the test sets, and the new evaluation measure, and it analyzes the results obtained by the systems and the more successful approaches. Eleven groups participated with 28 runs. In addition, we evaluated 16 baseline runs (2 per language) based solely on a pure IR approach, for comparison purposes. Considering accuracy, scores were generally higher than in previous QA campaigns.
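
The new evaluation measure referred to in the abstract is c@1, which extends plain accuracy by rewarding a system for leaving a question unanswered rather than returning a wrong answer: each unanswered question earns partial credit proportional to the system's overall accuracy. A minimal sketch in Python (the function and variable names are illustrative, not taken from the paper):

```python
def c_at_1(n_correct: int, n_unanswered: int, n_total: int) -> float:
    """c@1 measure used in ResPubliQA 2009.

    Unanswered questions earn partial credit equal to the system's
    accuracy over the whole question set, so declining to answer
    scores better than answering incorrectly.
    """
    if n_total <= 0:
        raise ValueError("n_total must be positive")
    accuracy = n_correct / n_total
    return (n_correct + n_unanswered * accuracy) / n_total

# Example: 250 correct answers and 100 unanswered out of 500 questions.
# c@1 = (250 + 100 * 0.5) / 500 = 0.60, versus plain accuracy of 0.50.
print(c_at_1(250, 100, 500))
```

When a system answers every question (n_unanswered = 0), c@1 reduces to ordinary accuracy, so the measure penalizes no one for not using the non-response option.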