Multi-level local descriptor quantization for bag-of-visterms image representation

  • Authors:
  • Pedro Quelhas; Jean-Marc Odobez

  • Affiliations:
  • IDIAP Research Institute, Martigny, Switzerland (both authors)

  • Venue:
  • Proceedings of the 6th ACM International Conference on Image and Video Retrieval (CIVR '07)
  • Year:
  • 2007

Abstract

Quantized local descriptors have been shown to be a good basis for image representation and can be applied to a wide range of tasks. However, current approaches typically consider only one level of quantization when creating the final image representation, thereby restricting the image description to a single level of visual detail. We propose to build image representations from a multi-level quantization of local interest point descriptors automatically extracted from the images. This multi-level representation describes both fine and coarse local image detail within a single framework. To evaluate the performance of our approach we perform scene image classification on a 13-class data set. We show that using information from multiple quantization levels increases classification performance, which suggests that the different granularities captured by the multi-level quantization produce a more discriminative image representation. Moreover, with a multi-level approach, the time needed to learn the quantization models can be reduced by learning the models for the different levels in parallel.
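
The sketch below illustrates one way to realize the idea described in the abstract; it is not the authors' implementation. It assumes k-means vocabularies of a few hand-picked sizes (the levels), quantizes the same local descriptors (e.g., SIFT) against each vocabulary, and concatenates the per-level bag-of-visterms histograms. Function names, vocabulary sizes, and the use of scikit-learn's KMeans are all illustrative assumptions.

```python
# Illustrative multi-level bag-of-visterms sketch (assumed details, not the paper's code).
import numpy as np
from sklearn.cluster import KMeans


def learn_vocabularies(train_descriptors, vocab_sizes=(100, 300, 1000), seed=0):
    """Fit one k-means vocabulary per quantization level.

    train_descriptors: (N, D) array of local descriptors pooled from training images.
    The levels are independent of each other, so in practice they could be
    learned in parallel, as the abstract notes.
    """
    return [
        KMeans(n_clusters=k, n_init=4, random_state=seed).fit(train_descriptors)
        for k in vocab_sizes
    ]


def multilevel_bov(image_descriptors, vocabularies):
    """Concatenate L1-normalized visterm histograms from every quantization level."""
    hists = []
    for vocab in vocabularies:
        labels = vocab.predict(image_descriptors)            # quantize descriptors at this level
        h = np.bincount(labels, minlength=vocab.n_clusters).astype(float)
        hists.append(h / max(h.sum(), 1.0))                  # per-level normalization
    return np.concatenate(hists)
```

In this reading, each level's histogram captures image content at a different granularity (coarse vocabularies merge visually similar patterns, fine vocabularies separate them), and the concatenated vector can then be passed to any standard classifier for tasks such as the 13-class scene classification used in the evaluation.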