Location coding for mobile image retrieval

  • Authors:
  • Sam S. Tsai, David Chen, Gabriel Takacs, Vijay Chandrasekhar, Jatinder P. Singh, Bernd Girod

  • Affiliations:
  • Stanford University, Stanford, CA (Tsai, Chen, Takacs, Chandrasekhar, Girod); Deutsche Telekom Inc. R&D Lab, Los Altos, CA (Singh)

  • Venue:
  • Proceedings of the 5th International ICST Mobile Multimedia Communications Conference
  • Year:
  • 2009


Abstract

For mobile image retrieval, efficient data transmission can be achieved by sending only the query features. Each query feature consists of a descriptor and a location in the image. The former is used to find candidate matching images via a "bag-of-words" approach, while the latter is used in a geometric consistency check that maps features in the query image to corresponding features in the database image. We investigate how to compress the location information and how lossy compression affects the geometric consistency check. The locations are first converted into a location histogram, and we then propose a context-based arithmetic coder with location refinement to code the histogram. The effects of lossily compressing the location information are evaluated empirically in terms of the errors in corresponding features and the error of the estimated geometric transformation model. In our experiments, a rate of ~5.1 bits per feature achieves errors comparable to lossless coding. The proposed scheme achieves a 12.5x rate reduction compared to a floating-point representation and a 2.8x rate reduction compared to a fixed-point representation.
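The first step the abstract describes, converting feature locations into a location histogram over a coarse spatial grid, can be sketched as below. The image size and block size are illustrative assumptions (the paper does not state them here), and the zero-order entropy computed at the end is only a crude per-feature rate bound; the paper's context-based arithmetic coder with location refinement exploits spatial context and refines positions within blocks.

```python
import math
from collections import Counter

def location_histogram(locations, image_size=(640, 480), block=8):
    """Quantize (x, y) feature locations into a coarse block grid.

    Returns a Counter mapping (block_x, block_y) -> feature count.
    Image and block sizes are illustrative, not the paper's values.
    """
    w, h = image_size
    hist = Counter()
    for x, y in locations:
        bx = min(int(x // block), (w - 1) // block)
        by = min(int(y // block), (h - 1) // block)
        hist[(bx, by)] += 1
    return hist

def empirical_rate_bits(hist):
    """Zero-order entropy of the block indices, in bits per feature.

    A real codec (such as a context-based arithmetic coder) models
    spatial context and would typically beat this naive estimate.
    """
    total = sum(hist.values())
    return -sum((c / total) * math.log2(c / total) for c in hist.values())

# Example: two features fall in the same 8x8 block, one elsewhere.
locs = [(10, 10), (11, 12), (100, 200)]
hist = location_histogram(locs)
rate = empirical_rate_bits(hist)
```

With these three features, `hist[(1, 1)]` is 2 and the entropy estimate is about 0.92 bits per feature for the block index alone; refinement bits within each block would be added on top.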