Embedding spatial context information into inverted file for large-scale image retrieval

  • Authors:
  • Zhen Liu; Houqiang Li; Wengang Zhou; Qi Tian

  • Affiliations:
  • University of Science and Technology of China, Hefei, China; University of Science and Technology of China, Hefei, China; University of Texas at San Antonio, Texas, USA; University of Texas at San Antonio, Texas, USA

  • Venue:
  • Proceedings of the 20th ACM international conference on Multimedia
  • Year:
  • 2012


Abstract

One of the most popular approaches to large-scale content-based image retrieval is the Bag-of-Visual-Words model. Since the spatial context among local features is very important for visual content identification, many approaches index local features' geometric cues, such as location, scale, and orientation, for post-verification. To achieve consistent accuracy, the number of top-ranked images that a post-verification approach must process is proportional to the image database size. When the database is very large, there are too many candidate images to verify within a real-time response. To address this issue, in this paper we explore two approaches for embedding spatial context information into the inverted file. The first builds a spatial relationship dictionary that embeds the spatial context among local features, which we call the one-one spatial relationship method. The second generates a spatial context binary signature for each feature, which we call the one-multiple spatial relationship method. We then build an inverted file carrying the spatial information between local features, so that geometric verification is achieved implicitly while traversing the inverted file. Experimental results on the benchmark Holidays dataset demonstrate the efficiency of the proposed algorithm.
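To make the one-multiple idea concrete, here is a minimal Python sketch of one plausible way to build a per-feature spatial context binary signature and compare it during posting-list traversal. This is not the authors' implementation: the Keypoint class, the angular binning scheme, the bin count, and the Hamming threshold are all illustrative assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Keypoint:
    x: float            # image coordinates of the local feature
    y: float
    orientation: float  # dominant orientation in radians

def spatial_signature(kp: Keypoint, neighbors: list, n_bins: int = 32) -> np.ndarray:
    """Binary signature encoding where neighboring features lie around kp.

    Each neighbor votes into an angular bin measured relative to kp's
    dominant orientation, so the signature is rotation-invariant.
    """
    sig = np.zeros(n_bins, dtype=np.uint8)
    for nb in neighbors:
        # Relative angle of the neighbor, normalized to [0, 2*pi)
        angle = (np.arctan2(nb.y - kp.y, nb.x - kp.x)
                 - kp.orientation) % (2.0 * np.pi)
        sig[int(angle / (2.0 * np.pi) * n_bins) % n_bins] = 1
    return sig

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two equal-length binary signatures."""
    return int(np.count_nonzero(a != b))

def verify_match(sig_query: np.ndarray, sig_indexed: np.ndarray,
                 threshold: int = 8) -> bool:
    """Accept a visual-word match only if the spatial contexts agree.

    The indexed signature is assumed to be stored alongside each posting
    in the inverted file, so this check runs during list traversal.
    """
    return hamming(sig_query, sig_indexed) <= threshold
```

The design point this sketch illustrates is that the signature travels with each entry in the inverted file: the Hamming test filters geometrically inconsistent matches while the posting lists are being scanned, so no separate post-verification pass over the top-ranked images is needed and the cost no longer grows with the database size.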