Weighted Local Similarity Pattern as image similarity model incorporated in GA-based relevance feedback mechanism

  • Authors:
  • Zoran Stejić;Yasufumi Takama;Kaoru Hirota

  • Affiliations:
  • Zoran Stejić (corresponding author, stejic@hrt.dis.titech.ac.jp): Department of Computational Intelligence and Systems Science (c/o Hirota Lab.), Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Yokohama 226-8502, Japan. Tel ...
  • Yasufumi Takama: PREST, Japan Science and Technology Corporation (JST), Tokyo, Japan, and Department of Electronic Systems Engineering, Tokyo Metropolitan Institute of Technology, Tokyo, Japan
  • Kaoru Hirota: Department of Electronic Systems Engineering, Tokyo Metropolitan Institute of Technology, Tokyo, Japan

  • Venue:
  • Intelligent Data Analysis
  • Year:
  • 2003


Abstract

Weighted Local Similarity Pattern (WLSP) is proposed as a new image similarity model, which considers two fundamental properties of the human visual system: (1) the saliency of regions within an image, and (2) the saliency of features within each region. Since both region and feature saliency are context dependent, a genetic algorithm (GA)-based relevance feedback mechanism is proposed to automatically infer the (sub-)optimal assignment of the two saliencies, based on the query image and the set of relevant images provided by the user. None of the existing image similarity models considers both region and feature saliency in a context-dependent sense or allows their automatic inference. In addition, this paper is the first to explicitly discuss the implications of the region and feature saliency properties for the design of an image similarity model in the framework of image retrieval. The proposed method, comprising the WLSP image similarity model and the GA-based relevance feedback mechanism, is evaluated on five test databases containing around 2,500 images and covering 62 semantic categories. Compared with eleven representative image similarity models, including three based on relevance feedback, the proposed model yields on average a 6% to 30% increase in retrieval precision. The results suggest that considering region and feature saliency in a context-dependent sense enables an image similarity model to more accurately capture human similarity perception.
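To illustrate the general idea of weighting both regions and features, the following is a minimal, hypothetical sketch, not the authors' implementation: a similarity score formed as a doubly weighted sum over per-region, per-feature similarities, where the weight vectors stand in for the region and feature saliencies that the GA-based relevance feedback would infer. All names and the per-feature similarity function are illustrative assumptions.

```python
# Hypothetical sketch of a region- and feature-weighted similarity score
# (illustrative only; not the WLSP model as specified in the paper).
# An image is represented as a list of regions, each region as a list of
# feature values. region_w and feature_w play the role of the saliency
# weights that the GA-based relevance feedback mechanism would infer.

def weighted_similarity(query, candidate, region_w, feature_w):
    """Doubly weighted sum of per-region, per-feature similarities.

    Per-feature similarity is taken as 1 / (1 + |difference|), a common
    simple choice; any normalized similarity could be substituted.
    """
    score = 0.0
    for r, (q_reg, c_reg) in enumerate(zip(query, candidate)):
        for f, (q_val, c_val) in enumerate(zip(q_reg, c_reg)):
            sim = 1.0 / (1.0 + abs(q_val - c_val))
            score += region_w[r] * feature_w[f] * sim
    return score

# Identical images with unit weights attain the maximum score
# (one per region-feature pair; here 2 regions x 2 features = 4.0).
q = [[0.2, 0.5], [0.8, 0.1]]
print(weighted_similarity(q, q, [1.0, 1.0], [1.0, 1.0]))  # -> 4.0
```

In a relevance feedback loop, a GA would evaluate candidate weight vectors by how well the resulting ranking places the user's relevant images near the top, and keep the best-scoring assignment.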