Extracting and composing robust features with denoising autoencoders

  • Authors:
  • Pascal Vincent;Hugo Larochelle;Yoshua Bengio;Pierre-Antoine Manzagol

  • Affiliations:
  • Université de Montréal, Montréal, Québec, Canada;Université de Montréal, Montréal, Québec, Canada;Université de Montréal, Montréal, Québec, Canada;Université de Montréal, Montréal, Québec, Canada

  • Venue:
  • Proceedings of the 25th international conference on Machine learning
  • Year:
  • 2008

Abstract

Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.
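To make the training principle concrete, below is a minimal illustrative sketch of a one-layer denoising autoencoder: the input is partially corrupted (here with masking noise that zeroes a random fraction of components), encoded to a hidden representation, decoded back, and trained so that the reconstruction matches the *uncorrupted* input. This is not the authors' implementation; the tied decoder weights, sigmoid units, cross-entropy loss, corruption fraction, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))


class DenoisingAutoencoder:
    """Illustrative one-layer denoising autoencoder (sketch only)."""

    def __init__(self, n_visible, n_hidden):
        # Tied weights: the decoder reuses W.T (an assumed, common choice).
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_hid = np.zeros(n_hidden)
        self.b_vis = np.zeros(n_visible)

    def corrupt(self, x, destroy_fraction=0.25):
        # Masking noise: zero out a random fraction of input components.
        mask = rng.random(x.shape) > destroy_fraction
        return x * mask

    def encode(self, x_tilde):
        return sigmoid(x_tilde @ self.W + self.b_hid)

    def decode(self, y):
        return sigmoid(y @ self.W.T + self.b_vis)

    def train_step(self, x, destroy_fraction=0.25, lr=0.1):
        x_tilde = self.corrupt(x, destroy_fraction)  # corrupted input
        y = self.encode(x_tilde)                     # hidden representation
        z = self.decode(y)                           # reconstruction

        # Cross-entropy reconstruction loss against the clean input x.
        eps = 1e-10
        loss = -np.mean(
            np.sum(x * np.log(z + eps) + (1 - x) * np.log(1 - z + eps), axis=1)
        )

        # Backprop for the tied-weight, sigmoid / cross-entropy case.
        n = x.shape[0]
        dz_pre = (z - x) / n                          # grad at decoder pre-activation
        dy_pre = (dz_pre @ self.W) * y * (1 - y)      # grad at encoder pre-activation
        grad_W = x_tilde.T @ dy_pre + dz_pre.T @ y    # encoder + decoder contributions
        self.W -= lr * grad_W
        self.b_hid -= lr * dy_pre.sum(axis=0)
        self.b_vis -= lr * dz_pre.sum(axis=0)
        return loss


# Toy usage on random binary data (illustrative only); in the paper's setting,
# several such layers would be trained greedily and stacked to initialize a
# deep network before supervised fine-tuning.
X = (rng.random((64, 100)) > 0.5).astype(float)
dae = DenoisingAutoencoder(n_visible=100, n_hidden=50)
for epoch in range(10):
    loss = dae.train_step(X)
```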