Search by mobile image based on visual and spatial consistency

  • Authors:
  • Xianglong Liu; Yihua Lou; Adams Wei Yu; Bo Lang

  • Affiliations:
  • State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China (all authors)

  • Venue:
  • ICME '11 Proceedings of the 2011 IEEE International Conference on Multimedia and Expo
  • Year:
  • 2011

Abstract

The performance of state-of-the-art image retrieval systems has been improved significantly by bag-of-words approaches. After being represented by visual words quantized from local features, images can be indexed and retrieved using scalable textual retrieval approaches. However, at least two issues remain unsolved, especially for search by mobile images with large variations: (1) the loss of the features' discriminative power due to quantization; and (2) the underuse of spatial relationships among visual words. To address both issues, and considering the properties of mobile images, this paper presents a novel method that couples visual and spatial information consistently: to improve discriminative power, the features of the query image are first grouped using both matched visual features and their spatial relationships; the grouped features are then softly matched to alleviate quantization loss. Experiments on both the UKBench database and a collected database of more than one million images show that the proposed method achieves a 10% improvement over the vocabulary tree approach and the bundled feature method.
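
The abstract describes a two-stage query pipeline: group the query image's local features using their spatial relationships, then soft-match the results against the quantized vocabulary to recover discriminative power lost in hard quantization. The Python sketch below illustrates those two ingredients under simple assumptions; the distance-weighted soft assignment, the fixed-radius spatial grouping, and all function names are illustrative stand-ins, since the paper's exact grouping and matching rules are not given in the abstract.

```python
import numpy as np

def soft_quantize(descriptors, vocabulary, k=3):
    """Soft-assign each local descriptor to its k nearest visual words.

    Hard quantization maps a descriptor to a single word and loses
    discriminative power; soft assignment (a common remedy, assumed
    here) spreads each descriptor over several nearby words with
    distance-based weights.
    """
    # Pairwise squared distances: shape (num_descriptors, vocab_size).
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argsort(d2, axis=1)[:, :k]          # k nearest word ids
    dk = np.take_along_axis(d2, nearest, axis=1)     # their distances
    # Stabilized exponential weights: closer words get heavier weight.
    w = np.exp(-(dk - dk.min(axis=1, keepdims=True)))
    w /= w.sum(axis=1, keepdims=True)                # normalize per feature
    return nearest, w

def group_by_proximity(keypoints, radius=50.0):
    """Greedily group features whose keypoints fall within `radius` pixels.

    A stand-in for the paper's grouping by matched visual features and
    their spatial relationships; the fixed-radius rule is an assumption.
    """
    n = len(keypoints)
    group_id = np.full(n, -1)
    groups = []
    for i in range(n):
        if group_id[i] >= 0:
            continue  # already placed in an earlier group
        near = np.linalg.norm(keypoints - keypoints[i], axis=1) <= radius
        members = np.where(near & (group_id < 0))[0]
        group_id[members] = len(groups)
        groups.append(members)
    return groups

# Toy usage: 100 random SIFT-like 128-d descriptors, a 1000-word vocabulary.
rng = np.random.default_rng(0)
desc = rng.normal(size=(100, 128))
vocab = rng.normal(size=(1000, 128))
kpts = rng.uniform(0, 640, size=(100, 2))  # keypoint (x, y) positions

words, weights = soft_quantize(desc, vocab)
groups = group_by_proximity(kpts)
print(f"{len(groups)} spatial groups; first feature's words: {words[0]}")
```

In a full system the spatial groups, rather than individual features, would be matched between the query and indexed images, so that a candidate match must agree both visually and in local layout.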