Linking images and keywords for semantics-based image retrieval

  • Authors:
A. Kutics; A. Nakagawa; K. Tanaka; M. Yamada; Y. Sanbe; S. Ohtsuka

  • Affiliations:
NTT Data Corp., Tokyo, Japan (all authors)

  • Venue:
  • ICME '03 Proceedings of the 2003 International Conference on Multimedia and Expo - Volume 2
  • Year:
  • 2003

Abstract

One of the major problems with existing content-based image retrieval systems is that their objective similarity measures rarely match the user's subjective, context-dependent interpretation of similarity. We propose a novel approach that links images with textual information to overcome this problem. First, salient image objects, together with their structural and visual features, are extracted. Next, keywords and images are linked in two stages: (1) low-level visual features of objects are mapped to related words using feature lexicons, and (2) words expressing higher-level semantic concepts are assigned to images on the basis of the feature-related words, lexical definitions, and the user's relevance feedback. Experimental results show that this two-level, multi-modal linking, combined with support for a wide variety of querying and browsing schemes and thus higher-level interactivity, approximates the user's retrieval semantics more closely.
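To make the two-stage linking concrete, here is a minimal, hypothetical Python sketch of the idea the abstract describes. The feature lexicons, concept-definition table, and feedback weighting below are illustrative stand-ins chosen for this sketch, not the authors' actual components or data.

```python
# Illustrative sketch of two-stage image-keyword linking (not the paper's code).
# Stage 1: quantized low-level features of salient objects -> feature words.
# Stage 2: feature words + lexical definitions + relevance feedback -> concepts.

# Hypothetical stage-1 feature lexicons (feature value -> related words).
COLOR_LEXICON = {"hue_bin_0": ["red", "warm"], "hue_bin_4": ["green", "natural"]}
TEXTURE_LEXICON = {"coarse": ["rough", "rocky"], "smooth": ["soft", "calm"]}

# Hypothetical stage-2 table: feature words appearing in a concept's
# lexical definition.
CONCEPT_DEFINITIONS = {
    "forest": {"green", "natural", "rough"},
    "sunset": {"red", "warm", "smooth"},
}

def feature_words(objects):
    """Stage 1: map each salient object's feature values to lexicon words."""
    words = set()
    for obj in objects:
        words.update(COLOR_LEXICON.get(obj["color"], []))
        words.update(TEXTURE_LEXICON.get(obj["texture"], []))
    return words

def concept_scores(words, feedback=None):
    """Stage 2: score concepts by overlap between feature words and the
    concept's definition, reweighted by per-concept feedback (+1/-1)."""
    feedback = feedback or {}
    scores = {}
    for concept, definition in CONCEPT_DEFINITIONS.items():
        overlap = len(words & definition) / len(definition)
        scores[concept] = overlap * (1.0 + 0.5 * feedback.get(concept, 0))
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Example: an image segmented into two salient objects.
objs = [{"color": "hue_bin_4", "texture": "coarse"},
        {"color": "hue_bin_0", "texture": "smooth"}]
print(concept_scores(feature_words(objs), feedback={"forest": 1}))
```

Under these assumptions, positive relevance feedback on "forest" boosts its score relative to "sunset", mirroring how user feedback refines the higher-level concept assignment in the proposed approach.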