Nonlinear Autoassociation Is Not Equivalent to PCA

  • Authors: Nathalie Japkowicz, Stephen Jose Hanson, Mark A. Gluck

  • Affiliations: Faculty of Computer Science, Dalhousie University, Halifax, Nova Scotia, Canada, B3H 1W5; Department of Psychology, Rutgers University, Newark, NJ 07102, U.S.A.; Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, U.S.A.

  • Venue: Neural Computation
  • Year: 2000

Abstract

A common misperception within the neural network community is that even with nonlinearities in their hidden layer, autoassociators trained with backpropagation are equivalent to linear methods such as principal component analysis (PCA). Our purpose is to demonstrate that nonlinear autoassociators actually behave differently from linear methods and that they can outperform these methods when used for latent extraction, projection, and classification. While linear autoassociators emulate PCA, and thus exhibit a flat or unimodal reconstruction error surface, autoassociators with nonlinearities in their hidden layer learn domains by building reconstruction error surfaces that, depending on the task, contain multiple local valleys. This interpolation bias allows nonlinear autoassociators to represent appropriate classifications of nonlinear multimodal domains, in contrast to linear autoassociators, which are inappropriate for such tasks. In fact, autoassociators with hidden unit nonlinearities can be shown to perform nonlinear classification and nonlinear recognition.
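
The contrast the abstract draws, reconstruction error measured against a PCA subspace versus against a backprop-trained autoassociator with a sigmoid hidden layer, can be made concrete with a small sketch. The sketch below is not the paper's code: the bimodal toy data, the 2-1-2 network, the learning rate, and the probe points are illustrative assumptions. It only shows how each reconstruction-error criterion is computed and used as a recognition score; the printed values depend on the random seed and training run.

```python
# Minimal sketch: rank-1 PCA reconstruction error vs. the reconstruction error
# of a backprop-trained autoassociator with one sigmoid hidden unit.
# Data, architecture, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Bimodal 2-D domain: two tight clusters whose centers lie on a line through the origin.
X = np.vstack([
    rng.normal(loc=[-3.0, 1.0], scale=0.2, size=(200, 2)),
    rng.normal(loc=[3.0, -1.0], scale=0.2, size=(200, 2)),
])

# --- Linear method: rank-1 PCA; error is the squared distance to the first
# --- principal axis (a single flat "valley" along that axis).
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
v1 = Vt[0]                                    # first principal direction

def pca_error(P):
    """Squared rank-1 PCA reconstruction error for each row of P."""
    C = P - mu
    R = C @ np.outer(v1, v1)                  # projection onto the principal axis
    return np.sum((C - R) ** 2, axis=1)

# --- Nonlinear autoassociator: 2 inputs -> 1 sigmoid hidden unit -> 2 linear
# --- outputs, trained by plain batch gradient descent on squared error.
def sigmoid(z):
    return 0.5 * (1.0 + np.tanh(0.5 * z))     # numerically stable logistic

W1 = rng.normal(scale=0.1, size=(2, 1)); b1 = np.zeros(1)
W2 = rng.normal(scale=0.1, size=(1, 2)); b2 = np.zeros(2)
lr, n = 0.1, len(X)

for _ in range(20000):
    H = sigmoid(X @ W1 + b1)                  # hidden activations
    E = (H @ W2 + b2) - X                     # reconstruction residual
    dH = (E @ W2.T) * H * (1.0 - H)           # backprop through the sigmoid
    W2 -= lr * (H.T @ E) / n;  b2 -= lr * E.mean(axis=0)
    W1 -= lr * (X.T @ dH) / n; b1 -= lr * dH.mean(axis=0)

def ae_error(P):
    """Squared reconstruction error of the trained autoassociator for each row of P."""
    Y = sigmoid(P @ W1 + b1) @ W2 + b2
    return np.sum((Y - P) ** 2, axis=1)

# Probe points: one at a cluster center, one midway between the clusters
# (it lies on the principal axis, so rank-1 PCA reconstructs it almost
# exactly), and one far off that axis.
probes = np.array([[-3.0, 1.0], [0.0, 0.0], [0.0, 4.0]])
print("mean training error   PCA: %.4f   autoassociator: %.4f"
      % (pca_error(X).mean(), ae_error(X).mean()))
for p, e_pca, e_ae in zip(probes, pca_error(probes), ae_error(probes)):
    print("probe %s   PCA: %.4f   autoassociator: %.4f" % (p, e_pca, e_ae))
```

Comparing the two error columns at the probe points is one way to inspect the "flat valley versus multiple valleys" distinction described above: the PCA criterion is low everywhere along its principal axis by construction, while the autoassociator's error surface depends on what the trained network has learned.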