We describe the objectives and organization of the CLEF 2006 ad hoc track and discuss the main characteristics of the tasks offered to test monolingual, bilingual, and multilingual textual document retrieval systems. The track was divided into two streams. The main stream offered mono- and bilingual tasks using the same collections as CLEF 2005: Bulgarian, English, French, Hungarian, and Portuguese. The second stream, designed for more experienced participants, offered the so-called "robust task", which used test collections from previous years in six languages (Dutch, English, French, German, Italian, and Spanish) with the objective of rewarding experiments that achieve good, stable performance over all queries rather than merely high average performance. The performance achieved for each task is presented and the results are discussed. The document collections used were taken from the CLEF multilingual comparable corpus of news documents.
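The robust task's emphasis on stability can be made concrete with a scoring sketch. Robust evaluations typically report the geometric mean of per-query average precision (GMAP) alongside the usual arithmetic mean (MAP), as in the TREC robust track, because the geometric mean collapses when any single query fails. The following is a minimal Python sketch under that assumption; the two systems and their per-query scores are invented purely for illustration.

    import math

    def mean_ap(ap_scores):
        # Arithmetic mean of per-query average precision (MAP).
        return sum(ap_scores) / len(ap_scores)

    def geometric_map(ap_scores, eps=1e-5):
        # Geometric mean of per-query average precision (GMAP).
        # A small epsilon floors zero scores so the log is defined;
        # a single near-zero query drags the whole score down, which
        # is how "stable over all queries" gets rewarded.
        return math.exp(sum(math.log(max(ap, eps)) for ap in ap_scores)
                        / len(ap_scores))

    # Two hypothetical systems with identical MAP but different stability.
    stable   = [0.30, 0.28, 0.32, 0.30]   # consistent on every query
    unstable = [0.85, 0.02, 0.30, 0.03]   # strong peaks, hard failures

    for name, scores in (("stable", stable), ("unstable", unstable)):
        print(f"{name}: MAP={mean_ap(scores):.3f}  "
              f"GMAP={geometric_map(scores):.3f}")

Both systems tie on MAP (0.300), but the unstable system's GMAP drops to roughly 0.111 versus about 0.300 for the stable one, which is exactly the behavior the robust task is designed to privilege.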