Contextual weighting for vocabulary tree based image retrieval

  • Authors:
  • Xiaoyu Wang; Ming Yang; Timothee Cour; Shenghuo Zhu; Kai Yu; Tony X. Han

  • Affiliations:
  • Dept. of ECE, Univ. of Missouri, Columbia, MO 65211, USA (X. Wang, T. X. Han); NEC Laboratories America, Inc., Cupertino, CA 95014, USA (M. Yang, T. Cour, S. Zhu, K. Yu)

  • Venue:
  • ICCV '11 Proceedings of the 2011 International Conference on Computer Vision
  • Year:
  • 2011


Abstract

In this paper we address the problem of image retrieval from millions of database images. We improve the vocabulary tree based approach by introducing contextual weighting of local features in both the descriptor and spatial domains. Specifically, we propose to incorporate efficient statistics of neighboring descriptors, both on the vocabulary tree and in the image spatial domain, into the retrieval. These contextual cues substantially enhance the discriminative power of individual local features with very small computational overhead. We have conducted extensive experiments on three benchmark datasets (UKbench, Holidays, and our new Mobile dataset), which show that our method reaches state-of-the-art performance with much less computation. Furthermore, the proposed method demonstrates excellent scalability in terms of both retrieval accuracy and efficiency in large-scale experiments using 1.26 million images from the ImageNet database as distractors.
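The abstract assumes familiarity with the vocabulary tree baseline the paper builds on: local descriptors are quantized to "visual words" (tree leaves), and database images are ranked by TF-IDF-weighted histogram similarity. The toy sketch below illustrates only that baseline pipeline, not the paper's contextual weighting; the 2-D "descriptors", flat 4-word vocabulary, and all function names are illustrative assumptions.

```python
# Minimal sketch of vocabulary-tree style retrieval (baseline only, not the
# paper's contextual weighting): quantize descriptors to visual words, then
# rank database images by cosine similarity of TF-IDF histograms.
import math
from collections import Counter

# Toy "vocabulary": 4 visual words (centroids) in 2-D descriptor space.
VOCAB = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def quantize(desc):
    """Index of the nearest visual word to a descriptor."""
    return min(range(len(VOCAB)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(desc, VOCAB[i])))

def bow(descriptors):
    """Bag-of-words histogram: visual word -> count."""
    return Counter(quantize(d) for d in descriptors)

def idf_weights(db_bows):
    """Inverse document frequency per word: idf = log(N / n_images_with_word)."""
    n = len(db_bows)
    df = Counter(w for h in db_bows for w in h)
    return {w: math.log(n / c) for w, c in df.items()}

def tfidf(hist, idf):
    """L2-normalized TF-IDF vector for one image's histogram."""
    total = sum(hist.values())
    v = {w: (c / total) * idf.get(w, 0.0) for w, c in hist.items()}
    norm = math.sqrt(sum(x * x for x in v.values())) or 1.0
    return {w: x / norm for w, x in v.items()}

def score(query_vec, db_vec):
    """Cosine similarity between two sparse TF-IDF vectors."""
    return sum(x * db_vec.get(w, 0.0) for w, x in query_vec.items())

# Three database "images", each a small set of local descriptors.
db = [
    [(0.1, 0.1), (0.2, 0.0), (0.9, 1.0)],   # mostly word 0
    [(1.0, 0.1), (0.9, 0.0), (1.1, -0.1)],  # all word 1
    [(0.0, 0.9), (0.1, 1.1), (1.0, 1.0)],   # words 2 and 3
]
db_bows = [bow(d) for d in db]
idf = idf_weights(db_bows)
db_vecs = [tfidf(h, idf) for h in db_bows]

# Query whose descriptors all quantize to word 1: image 1 should rank first.
query = [(1.0, 0.0), (0.95, 0.05)]
qvec = tfidf(bow(query), idf)
best = max(range(len(db)), key=lambda i: score(qvec, db_vecs[i]))  # → 1
```

The paper's contribution replaces the uniform per-word weights above with weights modulated by statistics of each feature's neighbors on the tree and in image space; the inverted-file scoring structure stays the same.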