University and Hospitals of Geneva Participating at ImageCLEF 2007

  • Authors:
  • Xin Zhou; Julien Gobeill; Patrick Ruch; Henning Müller

  • Affiliations:
  • Medical Informatics Service, University and Hospitals of Geneva, Switzerland (all authors); Henning Müller also with Business Information Systems, University of Applied Sciences, Sierre, Switzerland

  • Venue:
  • Advances in Multilingual and Multimodal Information Retrieval
  • Year:
  • 2008

Abstract

This article describes the participation of the University and Hospitals of Geneva in three tasks of the 2007 ImageCLEF image retrieval benchmark. Visual retrieval relied mainly on the GNU Image Finding Tool (GIFT), whereas multilingual text retrieval was performed by mapping the full-text documents and the queries in several languages onto MeSH (Medical Subject Headings) terms and using the EasyIR text retrieval engine for the actual retrieval. For the visual tasks it becomes clear that the baseline GIFT runs cannot match more sophisticated techniques such as visual patch histograms. GIFT can be regarded as a baseline for visual retrieval, as it has been used in ImageCLEF in the same configuration for the past four years. Whereas in 2004 GIFT was among the best-performing systems, it is now towards the lower end of the field, which shows how much overall retrieval quality has improved. Due to time constraints no further optimizations could be performed, and relevance feedback, one of the strong points of GIFT, was not used. The text retrieval runs perform well, demonstrating the effectiveness of mapping terms onto an ontology. Mixed visual-textual runs perform slightly worse than the best text-only results, meaning that more care needs to be taken in combining runs. English is by far the language with the best results; even a run mixing the three query languages performed worse.
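The abstract notes that mixed visual-textual runs performed slightly worse than the best text-only results. The paper's actual fusion code is not given here, so the following is only a minimal sketch of a generic late-fusion scheme (min-max normalization plus a weighted linear combination); the function names, weights, and example scores are assumptions made for illustration, not the authors' implementation.

```python
def normalize(scores):
    """Min-max normalize a {doc_id: score} mapping to the [0, 1] range."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}


def late_fusion(visual_scores, text_scores, text_weight=0.7):
    """Linearly combine normalized visual and textual retrieval scores.

    Documents missing from one run receive 0 from that modality.
    Returns a list of (doc_id, fused_score) sorted by decreasing score.
    """
    visual = normalize(visual_scores)
    text = normalize(text_scores)
    docs = set(visual) | set(text)
    fused = {
        doc: (1.0 - text_weight) * visual.get(doc, 0.0)
             + text_weight * text.get(doc, 0.0)
        for doc in docs
    }
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    # Hypothetical scores from a GIFT (visual) run and a MeSH/EasyIR (text) run.
    gift_run = {"img_001": 0.82, "img_042": 0.40, "img_099": 0.13}
    easyir_run = {"img_042": 11.2, "img_001": 3.5, "img_123": 7.8}
    print(late_fusion(gift_run, easyir_run))
```

Such a scheme makes the sensitivity to the mixing weight explicit: with a poorly chosen weight, the fused ranking can fall below the stronger single modality, which is consistent with the behavior the abstract reports for the mixed runs.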