A unified image retrieval framework on local visual and semantic concept-based feature spaces

  • Authors:
  • Md. Mahmudur Rahman (Dept. of Computer Science & Software Engineering, Concordia University, Canada)
  • Prabir Bhattacharya (Concordia Institute for Information Systems Engineering, Concordia University, Canada)
  • Bipin C. Desai (Dept. of Computer Science & Software Engineering, Concordia University, Canada)

  • Venue:
  • Journal of Visual Communication and Image Representation
  • Year:
  • 2009


Abstract

This paper presents a learning-based unified image retrieval framework that represents images in local visual and semantic concept-based feature spaces. In this framework, a visual concept vocabulary (codebook) is automatically constructed using a self-organizing map (SOM), and statistical models are built for local semantic concepts using a probabilistic multi-class support vector machine (SVM). Based on these constructions, the images are represented in correlation- and spatial-relationship-enhanced concept feature spaces by exploiting the topology-preserving local neighborhood structure of the codebook, local concept correlation statistics, and spatial relationships in individual encoded images. Finally, the features are unified through a dynamically weighted linear combination of per-feature similarity scores, with the weights derived from relevance feedback information. Each feature weight is calculated from both the precision and the rank order of the top retrieved relevant images for that representation, so the combination adapts to each individual search. Experimental results on a photographic database of natural scenes and a bio-medical database of different imaging modalities and body parts demonstrate the effectiveness of the proposed framework.
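The fusion step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact formulation: the precise weighting formula is an assumption here, chosen only to reflect the abstract's statement that each feature space's weight depends on both the precision and the rank order of its top retrieved relevant images from relevance feedback.

```python
def feature_weight(relevant_flags):
    """Weight for one feature representation, from relevance feedback on
    its top-k results. relevant_flags[i] is 1 if the image at rank i+1
    was marked relevant, else 0. (Hypothetical scoring: precision scaled
    by a normalized reciprocal-rank term, so relevant hits near the top
    count more; the paper's exact formula may differ.)"""
    k = len(relevant_flags)
    precision = sum(relevant_flags) / k
    rank_score = sum(1.0 / (i + 1) for i, rel in enumerate(relevant_flags) if rel)
    max_rank_score = sum(1.0 / (i + 1) for i in range(k))
    return precision * (rank_score / max_rank_score)

def combined_similarity(scores, weights):
    """Dynamically weighted linear combination of per-feature similarity
    scores for one database image. Falls back to a plain average when
    all weights are zero (e.g. no relevant images retrieved yet)."""
    total = sum(weights)
    if total == 0:
        return sum(scores) / len(scores)
    return sum(w * s for w, s in zip(scores, weights)) / total

# Example: visual-codebook features retrieved relevant images at higher
# ranks than the concept features, so they receive a larger weight.
w_visual = feature_weight([1, 1, 0, 1, 0])
w_concept = feature_weight([1, 0, 0, 0, 0])
sim = combined_similarity([0.8, 0.6], [w_visual, w_concept])
```

Because the weights are recomputed from each round of feedback, a query where the visual codebook ranks relevant images highly leans on visual similarity, while a query better served by the semantic concept space shifts weight there.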