Modeling spatial and semantic cues for large-scale near-duplicated image retrieval

  • Authors:
  • Shiliang Zhang;Qi Tian;Gang Hua;Wengang Zhou;Qingming Huang;Houqiang Li;Wen Gao

  • Affiliations:
  • Key Lab of Intell. Info. Process., Inst. of Comput. Tech., CAS, Beijing 100190, China;Dept. of Computer Science, University of Texas at San Antonio, San Antonio, TX 78249, USA;IBM Watson Research Center, 19 Skyline Drive, 2S-D49, Hawthorne, NY 10532, USA;Dept. of EEIS, University of Science and Technology of China, Hefei, PR China;Graduate University of Chinese Academy of Sciences, No. 19A, Yuquan Road, Shijingshan District, Beijing 100049, China;Dept. of EEIS, University of Science and Technology of China, Hefei, PR China;Key Lab of Intell. Info. Process., Inst. of Comput. Tech., CAS, Beijing 100190, China

  • Venue:
  • Computer Vision and Image Understanding

  • Year:
  • 2011


Abstract

The Bag-of-Visual-Words (BoW) image representation has proven to be one of the most promising solutions for large-scale near-duplicated image retrieval. However, the traditional visual vocabulary is created in an unsupervised way by clustering a large number of local image features, which is not ideal because it largely ignores the semantic and spatial contexts among local features. In this paper, we propose the geometric visual vocabulary, which captures spatial context by quantizing local features in a bi-space, i.e., in both descriptor space and orientation space. We then capture semantic context by learning a semantic-aware distance metric between local features, one that reasonably measures the semantic similarity between the image patches from which the local features are extracted. The learned distance is then used to cluster local features into a semantic visual vocabulary. Finally, we combine the spatial and semantic contexts in a unified framework by extracting local feature groups, computing the spatial configuration of the local features inside each group, and learning a semantic-aware distance between groups. The learned group distance is then used to cluster the extracted local feature groups into a novel visual vocabulary, i.e., the contextual visual vocabulary. The three proposed vocabularies, i.e., the geometric, semantic, and contextual visual vocabularies, are evaluated on large-scale near-duplicated image retrieval tasks. The geometric and semantic visual vocabularies both achieve better performance than the traditional visual vocabulary. Moreover, the contextual visual vocabulary, which combines both spatial and semantic cues, outperforms the state-of-the-art bundled feature in both retrieval precision and efficiency.
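To make the bi-space quantization idea concrete, the sketch below shows one plausible reading of it: descriptors are clustered into a descriptor-space codebook, each feature's dominant orientation is quantized into a fixed number of angular bins, and the pair (descriptor word, orientation bin) is flattened into a single geometric visual word. This is a hypothetical illustration, not the authors' implementation; the function names, the k-means codebook choice, and parameters such as n_desc_words and n_ori_bins are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Hypothetical sketch of bi-space (descriptor + orientation) quantization.
# Assumed inputs: SIFT-like descriptors (N x 128) and dominant orientations
# in radians (N,). All parameter values are illustrative, not from the paper.

def build_geometric_vocabulary(descriptors, n_desc_words=1000):
    """Cluster descriptors to form the descriptor-space codebook."""
    kmeans = MiniBatchKMeans(n_clusters=n_desc_words, random_state=0)
    kmeans.fit(descriptors)
    return kmeans

def quantize_bi_space(descriptors, orientations, kmeans, n_ori_bins=8):
    """Map each local feature to a geometric visual word: the pair
    (descriptor-space word, orientation bin) flattened to one index."""
    desc_words = kmeans.predict(descriptors)
    # Quantize orientation in [0, 2*pi) into n_ori_bins angular bins.
    ori_bins = np.floor((orientations % (2 * np.pi))
                        / (2 * np.pi) * n_ori_bins).astype(int)
    # One geometric word per (descriptor word, orientation bin) pair.
    return desc_words * n_ori_bins + ori_bins

# Toy usage with random data standing in for real local features.
rng = np.random.default_rng(0)
descs = rng.random((5000, 128)).astype(np.float32)
oris = rng.uniform(0, 2 * np.pi, size=5000)
km = build_geometric_vocabulary(descs, n_desc_words=100)
words = quantize_bi_space(descs, oris, km, n_ori_bins=8)
print(words[:10])  # geometric visual word ids in [0, 100 * 8)
```

Under this reading, the vocabulary size grows by a factor of n_ori_bins, trading a larger codebook for visual words that are more discriminative because mismatched orientations can no longer collide in the same word; the semantic and contextual vocabularies described in the abstract would replace the plain Euclidean k-means step with the learned semantic-aware distance.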