Modeling the joint density of two images under a variety of transformations

  • Authors:
  • J. Susskind; R. Memisevic; G. Hinton; M. Pollefeys

  • Affiliations:
  • Inst. for Neural Comput., Univ. of California, San Diego, CA, USA; Dept. of Comput. Sci., Univ. of Frankfurt, Frankfurt, Germany; Dept. of Comput. Sci., Univ. of Toronto, Toronto, ON, Canada; Dept. of Comput. Sci., ETH Zurich, Zurich, Switzerland

  • Venue:
  • CVPR '11 Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition
  • Year:
  • 2011


Abstract

We describe a generative model of the relationship between two images. The model is defined as a factored three-way Boltzmann machine, in which hidden variables collaborate to define the joint correlation matrix for image pairs. Modeling the joint distribution over pairs makes it possible to efficiently match images that are the same according to a learned measure of similarity. We apply the model to several face matching tasks, and show that it learns to represent the input images using task-specific basis functions. Matching performance is superior to previous similar generative models, including recent conditional models of transformations. We also show that the model can be used as a plug-in matching score to perform invariant classification.
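The factored three-way energy function described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the dimensions, weight matrices (`Wx`, `Wy`, `Wh`), and random values are hypothetical, and it assumes the standard factored gated-Boltzmann-machine form in which each factor multiplies a linear projection of each image with a projection of the binary hidden units. Summing out the hiddens gives a free energy that can serve as the matching score mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: two small image vectors, binary hiddens, factors.
n_pix, n_hid, n_fac = 16, 8, 12

# Hypothetical (untrained) factored parameters.
Wx = rng.standard_normal((n_pix, n_fac)) * 0.1  # image x -> factors
Wy = rng.standard_normal((n_pix, n_fac)) * 0.1  # image y -> factors
Wh = rng.standard_normal((n_hid, n_fac)) * 0.1  # hiddens -> factors

def energy(x, y, h):
    """Factored three-way energy:
    E(x, y, h) = -sum_f (x . Wx[:, f]) (y . Wy[:, f]) (h . Wh[:, f])."""
    return -np.sum((x @ Wx) * (y @ Wy) * (h @ Wh))

def free_energy(x, y):
    """Free energy with binary hiddens summed out analytically:
    F(x, y) = -sum_k log(1 + exp(sum_f Wh[k, f] (x . Wx[:, f]) (y . Wy[:, f])))."""
    factor_prod = (x @ Wx) * (y @ Wy)   # per-factor pairwise product, shape (n_fac,)
    hidden_input = factor_prod @ Wh.T   # net input to each hidden unit, shape (n_hid,)
    return -np.sum(np.logaddexp(0.0, hidden_input))

def match_score(x, y):
    """Higher score (lower free energy) means a better match
    under the model's learned notion of similarity."""
    return -free_energy(x, y)

x = rng.standard_normal(n_pix)
y = x + 0.01 * rng.standard_normal(n_pix)  # near-duplicate of x
z = rng.standard_normal(n_pix)             # unrelated image
print(match_score(x, y), match_score(x, z))
```

With trained weights, pairs related by the transformations seen during learning would receive lower free energy than unrelated pairs, which is what makes the score usable as a plug-in similarity for matching and invariant classification.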