Using visual dictionary to associate semantic objects in region-based image retrieval

  • Authors:
  • Rongrong Ji;Hongxun Yao;Zhen Zhang;Peifei Xu;Jicheng Wang

  • Affiliations:
  • School of Computer Science and Engineering, Harbin Institute of Technology, Harbin, China (all authors)

  • Venue:
  • ICIAR'07 Proceedings of the 4th international conference on Image Analysis and Recognition
  • Year:
  • 2007


Abstract

Beyond inaccurate segmentation, the performance of region-based image retrieval is also restricted by the diverse appearances of semantically similar objects. In contrast, humans' linguistic descriptions of image objects can reveal object information at a higher level. Using a partially annotated region collection as a "visual dictionary", this paper proposes a semantics-sensitive region retrieval framework based on a middle-level visual and textual object description. To achieve this goal, first, part of the image database is segmented into regions, which are manually annotated with keywords to construct the visual dictionary. Second, to associate appearance-diverse but semantically similar objects, a Bayesian reasoning approach is adopted to calculate the semantic similarity between two regions. This inference method uses the visual dictionary to link unannotated image regions at the semantic level. Based on this reasoning framework, both query-by-example and query-by-keyword user interfaces are provided to facilitate user queries. Experimental comparisons of our method against a visual-only region matching method indicate its effectiveness in enhancing region retrieval performance and bridging the semantic gap.
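The abstract does not give the paper's actual inference formulas, but the idea of using annotated dictionary regions to relate two unannotated regions can be sketched roughly as follows. This is a minimal illustration, not the authors' method: it assumes each dictionary region carries a visual feature vector and one keyword, estimates P(keyword | region) by Gaussian-kernel-weighted voting over the dictionary (a Parzen-style estimate), and scores two regions by the probability that they depict the same keyword. The function names, the `sigma` bandwidth, and the feature representation are all hypothetical choices for this sketch.

```python
import numpy as np

def keyword_posterior(region_feat, dict_feats, dict_labels, keywords, sigma=1.0):
    """Estimate P(keyword | region) by similarity-weighted voting over the
    annotated 'visual dictionary' regions (an assumed Parzen-style scheme,
    not necessarily the paper's Bayesian formulation)."""
    # Gaussian kernel on visual distance to each dictionary region.
    d2 = np.sum((dict_feats - region_feat) ** 2, axis=1)
    weights = np.exp(-d2 / (2.0 * sigma ** 2))
    post = np.zeros(len(keywords))
    for w, label in zip(weights, dict_labels):
        post[keywords.index(label)] += w
    total = post.sum()
    # Fall back to a uniform posterior if the region matches nothing.
    return post / total if total > 0 else np.full(len(keywords), 1.0 / len(keywords))

def semantic_similarity(feat_a, feat_b, dict_feats, dict_labels, keywords):
    """Semantic similarity of two unannotated regions: the probability that
    both depict the same keyword, marginalized over the dictionary."""
    pa = keyword_posterior(feat_a, dict_feats, dict_labels, keywords)
    pb = keyword_posterior(feat_b, dict_feats, dict_labels, keywords)
    return float(np.dot(pa, pb))
```

Under this scheme, two regions with very different visual features can still score as similar if each resembles dictionary regions annotated with the same keyword, which is the appearance-bridging effect the framework aims for.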