The LIMSI participation in the QAst 2009 track: experimenting on answer scoring

  • Authors:
  • Guillaume Bernard; Sophie Rosset; Olivier Galibert; Gilles Adda; Eric Bilinski

  • Affiliations:
  • LIMSI, CNRS; LIMSI, CNRS; Laboratoire National de Métrologie et d'Essai; LIMSI, CNRS; LIMSI, CNRS

  • Venue:
  • CLEF'09 Proceedings of the 10th cross-language evaluation forum conference on Multilingual information access evaluation: text retrieval experiments
  • Year:
  • 2009

Abstract

We present in this paper the three LIMSI question-answering systems on speech transcripts that participated in the QAst 2009 evaluation. These systems are based on a complete, multi-level analysis of both queries and documents, and they use an automatically generated research descriptor. A score based on these descriptors is used to select documents and snippets. Three different methods were tried to extract and score candidate answers; in particular, we present a ranking method based on tree transformations. We participated in all the tasks and submitted 30 runs (for 24 sub-tasks). The evaluation results for manual transcripts range from 27% to 36% accuracy depending on the task, and from 20% to 29% for automatic transcripts.
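The abstract mentions selecting documents and snippets via a score computed against an automatically generated research descriptor. The paper's actual scoring function is not given here, so the following is only a minimal illustrative sketch, under the assumption that a descriptor can be modeled as a set of weighted query elements and a snippet scored by the total weight of the elements it contains (the names `score_snippet` and `descriptor` are hypothetical, not from the paper):

```python
def score_snippet(snippet_tokens, descriptor):
    """Sum the weights of descriptor elements that appear in the snippet.

    descriptor: dict mapping a query element (token) to its weight,
    standing in for the paper's automatically generated research descriptor.
    """
    tokens = set(snippet_tokens)
    return sum(weight for term, weight in descriptor.items() if term in tokens)


# Illustrative weights only -- not taken from the paper.
descriptor = {"qast": 1.5, "evaluation": 1.0, "limsi": 2.0}
snippets = [
    ["the", "qast", "evaluation", "campaign"],
    ["speech", "transcripts", "analysis"],
]

# Rank candidate snippets by descriptor score, highest first.
ranked = sorted(snippets, key=lambda s: score_snippet(s, descriptor), reverse=True)
```

Here the best-scoring snippet is the one covering the most (and heaviest) descriptor elements; the real systems operate on a richer multi-level analysis than bare tokens.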