The study of cross-lingual Information Retrieval Systems (IRSs) and a deep analysis of system performance should provide guidelines, hints, and directions to drive the design and development of next-generation MultiLingual Information Access (MLIA) systems. In addition, effective tools for interpreting and comparing experimental results should be made easily available to the research community. To this end, we propose a twofold methodology for the evaluation of Cross-Language Information Retrieval (CLIR) systems: statistical analyses, which provide MLIA researchers with quantitative and more sophisticated analysis techniques, and graphical tools, which allow for a more qualitative comparison and an easier presentation of the results. We provide concrete examples of how the proposed methodology can be applied by studying the monolingual and bilingual tasks of the Cross-Language Evaluation Forum (CLEF) 2005 and 2006 campaigns.
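As an illustration of what such a twofold analysis might look like in practice, the sketch below combines a quantitative step (a one-way ANOVA plus Bonferroni-corrected paired t-tests over per-topic Average Precision scores) with a qualitative step (a box plot of the score distributions). This is a minimal sketch, not the authors' actual toolkit: the system names, topic count, and scores are hypothetical placeholders, and the use of SciPy and matplotlib is an assumption.

```python
# A minimal sketch of the twofold methodology described above:
# (1) a quantitative, statistical comparison of systems and
# (2) a qualitative, graphical one. All system names and scores
# below are hypothetical; a real analysis would start from the
# per-topic Average Precision (AP) values of actual CLEF runs.

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Hypothetical per-topic AP scores for three CLIR systems run on
# the same 50 topics.
systems = {
    "run_A": rng.beta(4.0, 6.0, size=50),
    "run_B": rng.beta(3.0, 7.0, size=50),
    "run_C": rng.beta(4.0, 5.0, size=50),
}
names = list(systems)
scores = list(systems.values())

# Quantitative part: one-way ANOVA tests whether at least one
# system's mean AP differs from the others.
f_stat, p_value = stats.f_oneway(*scores)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.4f}")

# Pairwise paired t-tests (every system is run on the same topics),
# Bonferroni-corrected for the number of comparisons.
n_pairs = len(names) * (len(names) - 1) // 2
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p = stats.ttest_rel(scores[i], scores[j])
        print(f"{names[i]} vs {names[j]}: t = {t:+.3f}, "
              f"corrected p = {min(1.0, p * n_pairs):.4f}")

# Qualitative part: a box plot of the per-topic AP distributions
# gives an at-a-glance comparison of the systems.
plt.boxplot(scores)
plt.xticks(range(1, len(names) + 1), names)
plt.ylabel("Average Precision")
plt.title("Per-topic AP distribution by system")
plt.savefig("ap_boxplot.png")
```

If a dedicated multiple-comparison procedure is preferred over Bonferroni-corrected t-tests, scipy.stats.tukey_hsd (available in SciPy 1.8 and later) implements Tukey's honestly significant difference test and could be swapped in for the pairwise loop.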