2009 marked UAIC's fourth consecutive participation in the QA@CLEF competition, with continually improving results. This paper describes the UAIC QA systems that participated in the RO-RO and EN-EN tasks. Both systems adhered to the classical QA architecture, with an emphasis on simplicity and real-time answers: only shallow parsing was used for question processing, the retrieval module's indexes were built at the coarse-grained paragraph and document levels, and the answer extraction component used simple pattern-based rules and lexical similarity metrics for candidate answer ranking. The results obtained this year were greatly improved over those of our team's previous participations, with an accuracy of 54% on the EN-EN task and 47% on the RO-RO task.
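To illustrate the kind of lexical similarity ranking mentioned above, the following is a minimal, hypothetical sketch using Jaccard overlap between question and candidate tokens; it is an assumption for illustration only, not the authors' actual UAIC implementation (which may use different tokenization and metrics).

```python
import re

# Hypothetical sketch of lexical-similarity candidate ranking.
# Not the authors' actual system; shown only to illustrate the idea.

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters.
    return set(t for t in re.split(r"\W+", text.lower()) if t)

def jaccard(a, b):
    # Jaccard similarity between two token sets.
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def rank_candidates(question, candidates):
    # Sort candidate answer passages by lexical overlap with the question.
    q_tokens = tokenize(question)
    return sorted(candidates,
                  key=lambda c: jaccard(q_tokens, tokenize(c)),
                  reverse=True)
```

In such a scheme, the top-ranked candidate is simply the passage sharing the most vocabulary with the question, which fits the abstract's stated emphasis on simplicity and speed over deep linguistic processing.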