Beyond bag of words: image representation in sub-semantic space

  • Authors:
  • Chunjie Zhang;Shuhui Wang;Chao Liang;Jing Liu;Qingming Huang;Haojie Li;Qi Tian

  • Affiliations:
  • University of Chinese Academy of Sciences, Beijing, China;Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China;School of Computer, Wuhan University, Wuhan, China;Institute of Automation, Chinese Academy of Sciences, Beijing, China;University of Chinese Academy of Sciences, Beijing, China;Dalian University of Technology, Dalian, China;University of Texas at San Antonio, San Antonio, USA

  • Venue:
  • Proceedings of the 21st ACM international conference on Multimedia
  • Year:
  • 2013

Abstract

Due to the semantic gap, low-level features are not able to represent the semantics of images well. Besides, traditional semantics-related image representations may not cope well with large inter-class variations and are not very robust to noise. To solve these problems, in this paper we propose a novel image representation method in the sub-semantic space. First, exemplar classifiers are trained by separating each training image from the others and serve as weak semantic similarity measures. Then a graph is constructed by combining the visual similarity and the weak semantic similarity of these training images. We partition this graph into visually and semantically similar sub-sets. Each sub-set of images is then used to train a classifier that separates this sub-set from the others. The learned sub-set classifiers are then used to construct a sub-semantic space based representation of images. This sub-semantic space is not only more semantically meaningful but also more reliable and resistant to noise. Finally, we perform image categorization using this sub-semantic space based representation on several public datasets to demonstrate the effectiveness of the proposed method.
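The abstract describes a pipeline of exemplar classifiers, a combined visual/weak-semantic graph, graph partitioning into sub-sets, and per-sub-set classifiers whose outputs form the representation. The sketch below is a minimal illustration of that kind of pipeline using scikit-learn; it is not the authors' implementation, and the choice of RBF visual similarity, spectral clustering for the graph partition, the combination weight `alpha`, and the number of sub-sets `n_subsets` are all assumptions made here for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel


def exemplar_scores(X):
    """Train one linear SVM per training image (that image vs. all others)
    and return the matrix of decision values as a weak semantic similarity."""
    n = X.shape[0]
    scores = np.zeros((n, n))
    for i in range(n):
        y = np.zeros(n)
        y[i] = 1                                   # single positive: image i
        clf = LinearSVC(C=1.0).fit(X, y)           # exemplar classifier for image i
        scores[:, i] = clf.decision_function(X)
    return scores


def sub_semantic_representation(X, n_subsets=20, alpha=0.5):
    """Map low-level features X (n_images x dim) into a sub-semantic space.

    `n_subsets` and `alpha` are illustrative hyper-parameters, not values
    reported in the paper."""
    # Visual similarity (assumed here: RBF kernel on low-level features).
    visual_sim = rbf_kernel(X)

    # Weak semantic similarity from exemplar classifiers, symmetrized and
    # rescaled to [0, 1] so it can be mixed with the visual similarity.
    semantic_sim = exemplar_scores(X)
    semantic_sim = (semantic_sim + semantic_sim.T) / 2.0
    semantic_sim = (semantic_sim - semantic_sim.min()) / np.ptp(semantic_sim)

    # Combined graph over the training images.
    graph = alpha * visual_sim + (1.0 - alpha) * semantic_sim

    # Partition the graph into visually and semantically coherent sub-sets
    # (assumed here: spectral clustering on the precomputed affinity).
    labels = SpectralClustering(n_clusters=n_subsets,
                                affinity='precomputed').fit_predict(graph)

    # One classifier per sub-set (sub-set vs. rest); stacking their decision
    # values gives the sub-semantic space representation of each image.
    subset_clfs = [LinearSVC(C=1.0).fit(X, (labels == k).astype(int))
                   for k in range(n_subsets)]
    representation = np.column_stack([clf.decision_function(X)
                                      for clf in subset_clfs])
    return representation, subset_clfs
```

Under these assumptions, a new image would be embedded by evaluating the learned sub-set classifiers on its low-level feature vector, and a standard classifier trained on the resulting sub-semantic representations would perform the final categorization.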