Splitting complex temporal questions for question answering systems
ACL '04 Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics
Temporal Semantics Extraction for Improving Web Search
DEXA '09 Proceedings of the 2009 20th International Workshop on Database and Expert Systems Application
The MIRACLE team at the CLEF 2008 multilingual question answering track
CLEF'08 Proceedings of the 9th Cross-language evaluation forum conference on Evaluating systems for multilingual and multimodal information access
Overview of ResPubliQA 2009: question answering evaluation over European legislation
CLEF'09 Proceedings of the 10th cross-language evaluation forum conference on Multilingual information access evaluation: text retrieval experiments
This paper summarizes the participation of the MIRACLE team in the Multilingual Question Answering Track at CLEF 2009. In this campaign, we took part in the monolingual Spanish task at ResPubliQA and submitted two runs. We adapted our QA system to the new JRC-Acquis collection and to the legal domain. We tested answer filtering and ranking techniques against a baseline system using passage retrieval, without success: the run using question analysis and passage retrieval obtained a global accuracy of 0.33, while the addition of an answer filtering step lowered it to 0.29. We provide an analysis of the results across different question types to investigate why it is difficult to leverage previous QA techniques. A further part of our work has been the application of temporal management to QA. Finally, we discuss the problems encountered with the new collection and the complexities of the domain.