On deep generative models with applications to recognition

  • Authors:
  • M. Ranzato; J. Susskind; V. Mnih; G. Hinton

  • Affiliations:
  • Dept. of Comput. Sci., Univ. of Toronto, Toronto, ON, Canada; Inst. for Neural Comput., Univ. of California, San Diego, CA, USA; Dept. of Comput. Sci., Univ. of Toronto, Toronto, ON, Canada; Dept. of Comput. Sci., Univ. of Toronto, Toronto, ON, Canada

  • Venue:
  • CVPR '11: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition
  • Year:
  • 2011

Abstract

The most popular way to use probabilistic models in vision is first to extract some descriptors of small image patches or object parts using well-engineered features, and then to use statistical learning tools to model the dependencies among these features and eventual labels. Learning probabilistic models directly on the raw pixel values has proved to be much more difficult and is typically only used for regularizing discriminative methods. In this work, we use one of the best pixel-level generative models of natural images, a gated MRF, as the lowest level of a deep belief network (DBN) that has several hidden layers. We show that the resulting DBN is very good at coping with occlusion when predicting expression categories from face images, and it can produce features that perform comparably to SIFT descriptors for discriminating different types of scene. The generative ability of the model also makes it easy to see what information is captured and what is lost at each level of representation.
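
The architecture the abstract describes places a pixel-level generative model at the bottom of a deep belief network and adds several hidden layers above it, trained greedily layer by layer. The sketch below is not the authors' implementation: it substitutes a plain Gaussian-visible RBM for the gated MRF, trains every layer with one-step contrastive divergence (CD-1), and uses illustrative layer sizes, learning rate, and toy patch data; it is only meant to show the greedy stacking pattern.

    # Minimal sketch (assumptions noted above): greedy layer-wise pretraining
    # of a DBN. The paper's lowest layer is a gated MRF trained on raw pixels;
    # here a Gaussian-visible RBM stands in for it, and the upper layers are
    # ordinary binary RBMs trained with CD-1.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class RBM:
        def __init__(self, n_vis, n_hid, gaussian_visible=False, lr=0.01):
            self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
            self.b_vis = np.zeros(n_vis)
            self.b_hid = np.zeros(n_hid)
            self.gaussian_visible = gaussian_visible  # real-valued pixels vs. binary units
            self.lr = lr

        def hidden_probs(self, v):
            return sigmoid(v @ self.W + self.b_hid)

        def visible_mean(self, h):
            pre = h @ self.W.T + self.b_vis
            return pre if self.gaussian_visible else sigmoid(pre)

        def cd1_update(self, v0):
            # Positive phase: hidden probabilities given the data.
            h0 = self.hidden_probs(v0)
            # Negative phase: one Gibbs step from sampled hidden states.
            h_sample = (rng.random(h0.shape) < h0).astype(float)
            v1 = self.visible_mean(h_sample)
            h1 = self.hidden_probs(v1)
            n = v0.shape[0]
            self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
            self.b_vis += self.lr * (v0 - v1).mean(axis=0)
            self.b_hid += self.lr * (h0 - h1).mean(axis=0)

    def pretrain_dbn(data, layer_sizes, epochs=5, batch=64):
        # Greedy layer-wise training: each RBM is fit on the hidden
        # activations produced by the layer below it.
        layers, x = [], data
        for i, n_hid in enumerate(layer_sizes):
            rbm = RBM(x.shape[1], n_hid, gaussian_visible=(i == 0))
            for _ in range(epochs):
                for start in range(0, len(x), batch):
                    rbm.cd1_update(x[start:start + batch])
            layers.append(rbm)
            x = rbm.hidden_probs(x)  # pass features up to the next layer
        return layers

    # Toy usage on random "patches" standing in for pixel data.
    patches = rng.standard_normal((512, 64))       # 512 patches of 8x8 pixels
    dbn = pretrain_dbn(patches, layer_sizes=[128, 64])
    features = patches
    for rbm in dbn:
        features = rbm.hidden_probs(features)      # top-level representation
    print(features.shape)                          # (512, 64)

Training each layer on the activities of the layer below is what makes the stack inspectable generatively: one can sample downward from any level to see what information that level of representation retains, which is the property the abstract highlights.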