Information retrieval baselines for the ResPubliQA task
CLEF'09 Proceedings of the 10th cross-language evaluation forum conference on Multilingual information access evaluation: text retrieval experiments
This paper describes the baselines proposed for the ResPubliQA 2009 task. The main aim in designing these baselines was to test the performance of a pure information retrieval approach on this task. Two baselines were run for each of the eight languages of the task, both using the Okapi BM25 ranking function, one with and one without stemming. In this paper we extend the previous baselines by comparing the BM25 model against the Vector Space Model (VSM) on this task. The results show that BM25 outperforms VSM in all cases.
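To make the ranking function behind the baselines concrete, the following is a minimal sketch of Okapi BM25 scoring over tokenized documents. The function name, the toy corpus, and the parameter values (k1=1.2, b=0.75 are common defaults) are illustrative assumptions, not the exact configuration used in the paper's experiments.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each document in `docs` (lists of tokens) against `query_terms`
    using the Okapi BM25 ranking function.

    k1 and b are illustrative defaults, not the paper's tuned settings."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N  # average document length
    # document frequency of each query term
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for d in docs:
        tf = Counter(d)  # term frequencies in this document
        score = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue  # term absent from the collection
            # smoothed inverse document frequency
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            # term-frequency saturation with document-length normalization
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1) / norm
        scores.append(score)
    return scores
```

Note how the b parameter controls length normalization: with the same term frequency, a shorter document scores higher, which is one of the behaviors that distinguishes BM25 from a plain vector space cosine ranking.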