Toward a higher-level visual representation for object-based image retrieval

  • Authors:
  • Yan-Tao Zheng, Shi-Yong Neo, Tat-Seng Chua, Qi Tian

  • Affiliations:
  • National University of Singapore, Singapore (Yan-Tao Zheng, Shi-Yong Neo, Tat-Seng Chua); Institute for Infocomm Research, Singapore (Qi Tian)

  • Venue:
  • The Visual Computer: International Journal of Computer Graphics
  • Year:
  • 2008

Abstract

We propose a higher-level visual representation, the visual synset, for object-based image retrieval that goes beyond visual appearance. The proposed representation improves the traditional part-based bag-of-words image representation in two aspects. First, it strengthens the discriminative power of visual words by constructing an intermediate descriptor, the visual phrase, from sets of frequently co-occurring visual words. Second, to bridge differences in visual appearance and achieve better intra-class invariance, it clusters visual words and phrases into visual synsets based on their class probability distributions. The rationale is that the distribution of a visual word or phrase tends to peak at the object classes to which it belongs. Testing on the Caltech-256 dataset shows that visual synsets can partially bridge the visual differences between images of the same class and deliver satisfactory retrieval of relevant images with different visual appearances.
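
The abstract outlines two steps: mining visual phrases from frequently co-occurring visual words, and grouping words and phrases into visual synsets by their class probability distributions. Below is a minimal Python sketch of these two steps, not the authors' implementation: the function names are hypothetical, phrase mining is simplified to word pairs, and k-means over class-distribution vectors stands in for whatever distributional clustering the paper actually uses.

```python
# Hypothetical sketch of the two steps described in the abstract.
# Assumes each image is given as a collection of visual word ids
# (from a quantized bag-of-words codebook) plus a class label.
from collections import Counter
from itertools import combinations

import numpy as np
from sklearn.cluster import KMeans


def mine_visual_phrases(images, min_support=50):
    """Find visual word pairs that co-occur in many images ("visual phrases")."""
    pair_counts = Counter()
    for words in images:
        for pair in combinations(sorted(set(words)), 2):
            pair_counts[pair] += 1
    return [pair for pair, n in pair_counts.items() if n >= min_support]


def class_distributions(images, labels, n_words, n_classes):
    """Estimate P(class | visual word) from labeled training images.

    Phrase distributions can be estimated the same way by treating each
    mined pair as a pseudo-word.
    """
    counts = np.zeros((n_words, n_classes))
    for words, label in zip(images, labels):
        for w in set(words):
            counts[w, label] += 1
    counts += 1e-6  # smoothing so rows with no observations still normalize
    return counts / counts.sum(axis=1, keepdims=True)


def build_visual_synsets(distributions, n_synsets=200):
    """Cluster words/phrases whose class distributions peak at similar classes."""
    km = KMeans(n_clusters=n_synsets, n_init=10, random_state=0)
    return km.fit_predict(distributions)  # synset id for each word/phrase
```

A toy run, just to show the intended data flow:

```python
# 4 images over a 6-word vocabulary, 2 classes.
images = [[0, 1, 2], [0, 1, 3], [4, 5, 2], [4, 5, 3]]
labels = [0, 0, 1, 1]
phrases = mine_visual_phrases(images, min_support=2)          # e.g. (0, 1), (4, 5)
dists = class_distributions(images, labels, n_words=6, n_classes=2)
synsets = build_visual_synsets(dists, n_synsets=2)            # synset id per word
```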