Overview of the CLEF 2004 multilingual question answering track

  • Authors and affiliations:
  • Bernardo Magnini (ITC-Irst, Trento, Italy)
  • Alessandro Vallin (ITC-Irst, Trento, Italy)
  • Christelle Ayache (ELDA/ELRA, Paris, France)
  • Gregor Erbach (DFKI, Saarbrücken, Germany)
  • Anselmo Peñas (Departamento de Lenguajes y Sistemas Informáticos, UNED, Madrid, Spain)
  • Maarten de Rijke (Informatics Institute, University of Amsterdam, The Netherlands)
  • Paulo Rocha (Linguateca, Braga Node, Universidade do Minho, Portugal)
  • Kiril Simov (IPP, Bulgarian Academy of Sciences, Sofia, Bulgaria)
  • Richard Sutcliffe (DLTG, University of Limerick, Ireland)

  • Venue:
  • CLEF'04 Proceedings of the 5th conference on Cross-Language Evaluation Forum: Multilingual Information Access for Text, Speech and Images
  • Year:
  • 2004

Abstract

Following the pilot Question Answering Track at CLEF 2003, a new evaluation exercise for multilingual QA systems took place in 2004. This paper reports on the novelties introduced in the new campaign and on the participants' results. Almost all cross-language combinations between nine source languages and seven target languages were exploited to set up more than fifty different tasks, both monolingual and bilingual. New types of questions (How-questions and definition questions) were given as input to the participating systems, while just one exact answer per question was allowed as output. The evaluation exercise highlighted some difficulties in assessing definition questions, which can be addressed in future campaigns, but the overall analysis of submissions shows encouraging results.