Unsupervised and Supervised Visual Codes with Restricted Boltzmann Machines

  • Authors:
  • Hanlin Goh, Nicolas Thome, Matthieu Cord, Joo-Hwee Lim

  • Affiliations:
  • Laboratoire d'Informatique de Paris 6, UPMC - Sorbonne Universités, France
  • Institute for Infocomm Research, A*STAR, Singapore
  • Image and Pervasive Access Laboratory, CNRS UMI 2955, France

  • Venue:
  • ECCV'12: Proceedings of the 12th European Conference on Computer Vision, Part V
  • Year:
  • 2012


Abstract

Recently, the coding of local features (e.g. SIFT) for image categorization tasks has been extensively studied. Incorporated within the Bag of Words (BoW) framework, these techniques optimize the projection of local features into the visual codebook, leading to state-of-the-art performance on many benchmark datasets. In this work, we propose a novel visual codebook learning approach using the restricted Boltzmann machine (RBM) as our generative model. Our contribution is threefold. First, we steer the unsupervised RBM learning using a regularization scheme, which decomposes into a combined prior for the sparsity of each feature's representation as well as the selectivity for each codeword. The codewords are then fine-tuned to be discriminative through supervised learning from top-down labels. Second, we evaluate the proposed method on the Caltech-101 and 15-Scenes datasets, matching or outperforming state-of-the-art results. The codebooks are compact and inference is fast. Finally, we introduce an original method to visualize the codebooks and decipher what each visual codeword encodes.
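The abstract's core idea of steering unsupervised RBM learning with a sparsity-oriented regularizer can be illustrated in a few lines of NumPy. The sketch below is a hedged illustration, not the paper's implementation: it trains a Bernoulli RBM with one step of contrastive divergence (CD-1) and adds a simple penalty that nudges each hidden unit's mean activation toward a target rate, a simplification of the paper's combined sparsity/selectivity prior. All names (`SparseRBM`, `cd1_step`, hyperparameter values) are hypothetical choices for the example, and the data is random binary patterns standing in for quantized local descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SparseRBM:
    """Bernoulli RBM trained with CD-1 plus a sparsity penalty that
    pushes each hidden unit's mean activation toward a target rate."""

    def __init__(self, n_visible, n_hidden, lr=0.05,
                 sparsity=0.05, sparsity_cost=0.1):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr
        self.sparsity = sparsity
        self.sparsity_cost = sparsity_cost

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden probabilities given the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs step back to visible, then hidden.
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        # CD-1 gradient estimate for the weights.
        dW = (v0.T @ h0 - v1.T @ h1) / n
        # Sparsity regularizer (illustrative simplification of the
        # paper's sparsity/selectivity prior): penalize deviation of
        # each hidden unit's mean activation from the target rate.
        excess = h0.mean(axis=0) - self.sparsity
        dW -= self.sparsity_cost * np.outer(v0.mean(axis=0), excess)
        self.W += self.lr * dW
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * ((h0 - h1).mean(axis=0)
                               - self.sparsity_cost * excess)
        return np.mean((v0 - v1) ** 2)  # reconstruction error

# Toy data: binary patterns standing in for encoded local features.
data = (rng.random((64, 32)) < 0.3).astype(float)
rbm = SparseRBM(n_visible=32, n_hidden=16)
errors = [rbm.cd1_step(data) for _ in range(200)]
```

In the full pipeline described by the abstract, the learned hidden probabilities would serve as the visual code for each local descriptor, and a supervised fine-tuning pass would then adjust the weights using image labels; that discriminative stage is omitted here.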