The first CLEF campaign was highly successful in attracting increased participation compared to its predecessor, the TREC-8 cross-language track: both the number of participants and the number of experiments grew considerably. This paper presents details of the various subtasks and summarizes the main results and research directions that were observed. Additionally, the CLEF collection is examined with respect to the completeness of its relevance assessments. The analysis indicates that the CLEF relevance assessments are of comparable quality to those of the well-known and trusted TREC ad hoc collections.
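The abstract does not describe how the completeness of the relevance assessments was analyzed; such pooling studies are often carried out as a leave-one-group-out check, where each run is re-scored after removing the relevant documents that only it contributed to the pool. The sketch below is a minimal, hypothetical illustration of that idea, assuming invented data structures (qrels as topic-to-relevant-document sets, runs as topic-to-ranking maps); it is not the paper's actual procedure.

```python
# A minimal sketch of a leave-one-group-out pool-completeness check.
# The data structures (qrels, runs) are invented for illustration.

def average_precision(ranking, relevant):
    """Uninterpolated average precision of one ranked list."""
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

def leave_one_out_map(runs, qrels):
    """MAP of each run with and without its uniquely found relevant docs."""
    results = {}
    for name, run in runs.items():
        full, reduced = [], []
        for topic, relevant in qrels.items():
            ranking = run.get(topic, [])
            # Documents retrieved by any *other* run for this topic.
            others = set().union(
                *(set(r.get(topic, [])) for n, r in runs.items() if n != name)
            )
            # Relevant documents contributed to the pool only by this run.
            unique = relevant - others
            full.append(average_precision(ranking, relevant))
            reduced.append(average_precision(ranking, relevant - unique))
        results[name] = (sum(full) / len(full), sum(reduced) / len(reduced))
    return results

# Toy example: two runs, one topic, three judged-relevant documents.
qrels = {"T1": {"d1", "d2", "d3"}}
runs = {
    "runA": {"T1": ["d1", "d2", "d9"]},  # finds d1 and d2
    "runB": {"T1": ["d3", "d8", "d1"]},  # uniquely finds d3
}
for name, (orig, loo) in leave_one_out_map(runs, qrels).items():
    print(f"{name}: MAP={orig:.3f}, leave-one-out MAP={loo:.3f}")
```

Under this style of analysis, if re-scoring each run without its uniquely contributed relevant documents barely changes its mean average precision, the pooled judgments can be considered reasonably complete.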