Binary SIFT: towards efficient feature matching verification for image search

  • Authors:
  • Wengang Zhou;Houqiang Li;Meng Wang;Yijuan Lu;Qi Tian

  • Affiliations:
  • University of Texas at San Antonio, San Antonio, TX;University of Science and Technology of China, Hefei, P. R. China;Hefei University of Technology, Hefei, P. R. China;Texas State University, San Marcos, TX;University of Texas at San Antonio, San Antonio, TX

  • Venue:
  • Proceedings of the 4th International Conference on Internet Multimedia Computing and Service
  • Year:
  • 2012

Abstract

Recently, great advances have been made in large-scale content-based image search. Most state-of-the-art approaches are based on the Bag-of-Visual-Words model with local features such as SIFT. Visual matching between images is obtained by vector quantization of local features: two feature vectors from different images are considered a match if they are quantized to the same visual word, even when the L2-distance between them is large. Quantization may therefore introduce many false positive matches. To address this problem, we propose to generate a binary SIFT signature from the original SIFT descriptor. We demonstrate that the L2-distance between original SIFT descriptors is well preserved by the Hamming distance between the corresponding binary SIFT vectors. Two feature vectors quantized to the same visual word are considered a valid match only when the Hamming distance between their binary SIFT vectors is below a threshold. With binary SIFT, most false positive matches can be identified and removed effectively and efficiently, which greatly improves the accuracy of large-scale image search. We evaluate the proposed approach by conducting partial-duplicate image search on a one-million-image database. The experimental results demonstrate the effectiveness and efficiency of our scheme.
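The verification step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual binarization rule: the binarization scheme (thresholding each of the 128 SIFT dimensions against the descriptor's median) and the Hamming threshold value are assumptions chosen for demonstration.

```python
import numpy as np

def binarize_sift(desc):
    # Hypothetical binarization: threshold each of the 128 SIFT dimensions
    # against the descriptor's own median, yielding a 128-bit signature.
    # (The paper derives its binary SIFT differently; this is a sketch.)
    desc = np.asarray(desc, dtype=np.float64)
    return (desc > np.median(desc)).astype(np.uint8)

def hamming(a, b):
    # Number of differing bits between two binary signatures.
    return int(np.count_nonzero(a != b))

def is_valid_match(desc1, desc2, max_hamming=24):
    # Two features quantized to the same visual word are accepted as a
    # valid match only when the Hamming distance between their binary
    # SIFT signatures is below a threshold (24 here is illustrative).
    return hamming(binarize_sift(desc1), binarize_sift(desc2)) < max_hamming
```

In a bag-of-visual-words pipeline this check would run only on feature pairs that already fell into the same visual word, so the extra cost per candidate match is a single 128-bit Hamming distance.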