A corpus-based relevance feedback approach to cross-language image retrieval

  • Authors:
  • Yih-Chen Chang;Wen-Cheng Lin;Hsin-Hsi Chen

  • Affiliations:
  • Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan;Department of Medical Informatics, Tzu Chi University, Hualien, Taiwan;Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan

  • Venue:
  • CLEF'05 Proceedings of the 6th international conference on Cross-Language Evaluation Forum: Accessing Multilingual Information Repositories
  • Year:
  • 2005

Abstract

This paper regards images with captions as a cross-media parallel corpus and presents a corpus-based relevance feedback approach that combines the results of visual and textual runs. Experimental results show that this approach performs well. Compared with the initial visual retrieval, the mean average precision (MAP) increases from 8.29% to 34.25% after relevance feedback from the cross-media parallel corpus. The MAP of cross-lingual image retrieval increases from 23.99% to 39.77% when the results of the textual run and the visual run with relevance feedback are combined. In addition, the monolingual experiments show consistent effects of this approach: the MAP of monolingual retrieval improves from 39.52% to 50.53% when the results of the text and image queries are merged.
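
The abstract states that textual and visual runs are combined but does not specify how. The sketch below shows one plausible way to merge two ranked runs by weighted linear score fusion; the function names, min-max normalization, and the `alpha` weight are assumptions for illustration, not the authors' actual combination method.

```python
# Minimal sketch: merge a textual run and a visual run by weighted score fusion.
# The normalization and weighting are assumptions; the paper's exact scheme is
# not described in this abstract.

from typing import Dict


def normalize(scores: Dict[str, float]) -> Dict[str, float]:
    """Min-max normalize retrieval scores into [0, 1]."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {doc: 1.0 for doc in scores}
    return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}


def merge_runs(textual: Dict[str, float],
               visual: Dict[str, float],
               alpha: float = 0.5) -> Dict[str, float]:
    """Linearly combine normalized scores of the two runs.

    alpha weights the textual run; an image appearing in only one run
    contributes only that run's (weighted) score.
    """
    t, v = normalize(textual), normalize(visual)
    return {
        doc: alpha * t.get(doc, 0.0) + (1 - alpha) * v.get(doc, 0.0)
        for doc in set(t) | set(v)
    }


# Toy example: two small runs over a handful of images.
textual_run = {"img01": 12.3, "img02": 7.8, "img03": 3.1}
visual_run = {"img02": 0.91, "img03": 0.84, "img05": 0.40}
ranked = sorted(merge_runs(textual_run, visual_run).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked)
```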