CLEF 2008 ad-hoc track: comparing and combining different IR approaches

  • Authors:
  • Jens Kürsten, Thomas Wilhelm, Maximilian Eibl

  • Affiliations:
  • Chemnitz University of Technology, Faculty of Computer Science, Computer Science and Media, Chemnitz, Germany (all authors)

  • Venue:
  • CLEF'08 Proceedings of the 9th Cross-language evaluation forum conference on Evaluating systems for multilingual and multimodal information access
  • Year:
  • 2008


Abstract

This article describes post-workshop experiments conducted after our first participation in the TEL@CLEF task. We used the Xtrieval framework [5], [4] to prepare and execute the experiments. We ran 69 experiments in the setting of the CLEF 2008 task, of which 39 were monolingual and 30 were cross-lingual. We investigated the capabilities of the current version of Xtrieval, which can now use two retrieval cores, Lucene and Lemur. Our main goal was to compare and combine the results from these retrieval engines. The topics for the cross-lingual experiments were translated with a plug-in that accesses the Google AJAX language API. Our monolingual experiments performed better than the best experiments we submitted during the evaluation campaign. Our cross-lingual experiments performed very well for all target collections, achieving between 87% and 100% of the monolingual retrieval effectiveness. Combining the results from the Lucene and Lemur retrieval cores yielded very consistent performance.
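The abstract does not spell out how the two engines' result lists are combined. A common approach for this kind of fusion is CombSUM over normalized scores: rescale each engine's scores into a comparable range, then sum the scores a document receives from both lists. The sketch below illustrates that idea in Java; the class and method names (ResultFusion, combSum, ScoredDoc) are illustrative, and CombSUM with min-max normalization is an assumption, not necessarily the formula Xtrieval uses.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Minimal sketch of score-based result fusion (CombSUM with min-max
 * normalization) for merging ranked lists from two retrieval engines,
 * e.g. Lucene and Lemur. Illustrative only; the paper does not state
 * which combination scheme was used.
 */
public class ResultFusion {

    /** One (document id, score) entry from a single engine's result list. */
    record ScoredDoc(String docId, double score) {}

    /** Min-max normalize scores into [0, 1] so both engines are comparable. */
    static Map<String, Double> normalize(List<ScoredDoc> results) {
        double min = results.stream().mapToDouble(ScoredDoc::score).min().orElse(0.0);
        double max = results.stream().mapToDouble(ScoredDoc::score).max().orElse(1.0);
        double range = (max - min) == 0 ? 1.0 : (max - min);
        Map<String, Double> normalized = new HashMap<>();
        for (ScoredDoc d : results) {
            normalized.put(d.docId(), (d.score() - min) / range);
        }
        return normalized;
    }

    /** CombSUM: sum the normalized scores a document gets from each engine. */
    static List<ScoredDoc> combSum(List<ScoredDoc> lucene, List<ScoredDoc> lemur) {
        Map<String, Double> fused = new HashMap<>(normalize(lucene));
        normalize(lemur).forEach((doc, score) -> fused.merge(doc, score, Double::sum));
        List<ScoredDoc> merged = new ArrayList<>();
        fused.forEach((doc, score) -> merged.add(new ScoredDoc(doc, score)));
        merged.sort(Comparator.comparingDouble(ScoredDoc::score).reversed());
        return merged;
    }

    public static void main(String[] args) {
        // Toy result lists with engine-specific score ranges.
        List<ScoredDoc> lucene = List.of(
                new ScoredDoc("d1", 7.2), new ScoredDoc("d2", 5.1), new ScoredDoc("d3", 2.0));
        List<ScoredDoc> lemur = List.of(
                new ScoredDoc("d2", 0.9), new ScoredDoc("d4", 0.6), new ScoredDoc("d1", 0.3));
        combSum(lucene, lemur).forEach(d ->
                System.out.printf("%s\t%.3f%n", d.docId(), d.score()));
    }
}
```

Documents retrieved by both engines (d1, d2 above) accumulate score from each list, which is what makes this kind of fusion tend toward consistent performance across collections: it rewards agreement between the engines.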