This report presents the work carried out at the NLE Lab for the QA@CLEF-2009 competition. We used the JIRS passage retrieval system, which is based on redundancy: the assumption that, in a sufficiently large document collection, the answer to a question can be found in some passage. Retrieved passages are ranked according to the number, length, and position of the question n-gram structures found in them. The best results were obtained for monolingual English, and the worst for French; we attribute this difference to question style, which varies considerably from one language to another.
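The ranking idea above can be illustrated with a minimal sketch. This is not the actual JIRS scoring formula (which also weights n-grams by their position in the passage and uses its own term weighting); it only shows the core intuition that longer question n-grams found in a passage contribute more to its score. All function names here are illustrative, not part of JIRS.

```python
def ngrams(tokens, n):
    """All contiguous n-grams of length n from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def score_passage(question_tokens, passage_tokens):
    """Toy n-gram overlap score: every question n-gram that also occurs
    in the passage adds its length n to the score, so longer matching
    structures dominate shorter ones."""
    passage_grams = set()
    for n in range(1, len(passage_tokens) + 1):
        passage_grams.update(ngrams(passage_tokens, n))
    score = 0
    for n in range(1, len(question_tokens) + 1):
        for g in ngrams(question_tokens, n):
            if g in passage_grams:
                score += n  # weight grows with n-gram length
    return score

question = "who won the race".split()
passages = [
    "the race was won by Alice who won easily".split(),
    "the weather was rainy during the race".split(),
]
# The passage sharing longer question n-grams ("who won", "the race")
# is ranked first.
ranked = sorted(passages, key=lambda p: score_passage(question, p), reverse=True)
```

Under this toy scheme the first passage outranks the second because it matches the bigrams "who won" and "the race" in addition to the individual terms, mirroring how redundancy-based retrieval favors passages that preserve the question's n-gram structure.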