Comparison and combination of textual and visual features for interactive cross-language image retrieval

  • Authors:
  • Pei-Cheng Cheng; Jen-Yuan Yeh; Hao-Ren Ke; Been-Chian Chien; Wei-Pang Yang

  • Affiliations:
  • Pei-Cheng Cheng: Department of Computer & Information Science, National Chiao Tung University, Hsinchu, Taiwan, R.O.C.
  • Jen-Yuan Yeh: Department of Computer & Information Science, National Chiao Tung University, Hsinchu, Taiwan, R.O.C.
  • Hao-Ren Ke: University Library, National Chiao Tung University, Hsinchu, Taiwan, R.O.C.
  • Been-Chian Chien: Department of Computer Science and Information Engineering, National University of Tainan, Tainan, Taiwan, R.O.C.
  • Wei-Pang Yang: Department of Computer & Information Science, National Chiao Tung University, Hsinchu, Taiwan, R.O.C.

  • Venue:
  • CLEF'04: Proceedings of the 5th Conference on Cross-Language Evaluation Forum: Multilingual Information Access for Text, Speech and Images
  • Year:
  • 2004

Abstract

This paper concentrates on the user-centered search task at ImageCLEF 2004. In this work, we combine textual and visual features for cross-language image retrieval and propose two interactive retrieval systems, T_ICLEF and VCT_ICLEF. The first incorporates a relevance feedback mechanism based on textual information, while the second combines textual and image information to help users find a target image. The experimental results show that VCT_ICLEF performed better in almost all cases: overall, it helped users find the topic image in fewer iterations, saving up to 2 iterations. Our user survey also reported that combining textual and visual information helps users convey to the system what they really have in mind.