From text to image: generating visual query for image retrieval

  • Authors:
  • Wen-Cheng Lin, Yih-Chen Chang, Hsin-Hsi Chen

  • Affiliations:
  • Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan (all authors)

  • Venue:
  • CLEF'04: Proceedings of the 5th Conference on Cross-Language Evaluation Forum: Multilingual Information Access for Text, Speech and Images
  • Year:
  • 2004

Abstract

This paper explores the use of visual features for cross-language access to an image collection. An approach that transforms textual queries into visual representations is proposed. Relationships between text and images are mined, and the mined relationships are employed to construct visual queries from textual ones. The retrieval results of the textual and visual queries are then combined to generate the final ranked list. We conducted English monolingual and Chinese-English cross-language retrieval experiments. Overall performance is good: the average precision of the English monolingual textual run is 0.6304, and cross-lingual retrieval achieves about 70% of monolingual performance. In comparison, the gain from the generated visual query is not significant. If only appropriate query terms are selected to generate the visual query, retrieval performance could be improved.
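
The combination of textual and visual retrieval results mentioned in the abstract can be illustrated with a minimal sketch. The min-max normalization, the linear weighting, and the weight value alpha below are illustrative assumptions, not the paper's exact merging scheme.

```python
# Minimal sketch of merging textual and visual retrieval results into one
# ranked list. The normalization and linear weighting are assumptions for
# illustration, not the exact fusion used in the paper.

def normalize(scores):
    """Min-max normalize a {doc_id: score} mapping to the range [0, 1]."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {doc: 1.0 for doc in scores}
    return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

def merge_ranked_lists(text_scores, visual_scores, alpha=0.7):
    """Combine textual and visual scores, weighting the textual run by alpha."""
    text_n = normalize(text_scores)
    visual_n = normalize(visual_scores)
    docs = set(text_n) | set(visual_n)
    combined = {
        doc: alpha * text_n.get(doc, 0.0) + (1 - alpha) * visual_n.get(doc, 0.0)
        for doc in docs
    }
    # Sort by combined score, highest first, to produce the final ranked list.
    return sorted(combined.items(), key=lambda item: item[1], reverse=True)

# Toy usage with hypothetical image IDs and scores:
text_scores = {"img_01": 3.2, "img_02": 1.1, "img_03": 0.4}
visual_scores = {"img_02": 0.9, "img_03": 0.8, "img_04": 0.5}
print(merge_ranked_lists(text_scores, visual_scores))
```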