Original Contribution: Principal components, minor components, and linear neural networks

  • Authors: Erkki Oja
  • Affiliations: -
  • Venue: Neural Networks
  • Year: 1992


Abstract

Many neural network realizations of the statistical technique of Principal Component Analysis (PCA) have recently been proposed. Explicit connections between numerical constrained adaptive algorithms and neural networks with constrained Hebbian learning rules are reviewed. The Stochastic Gradient Ascent (SGA) neural network is proposed and shown to be closely related to the Generalized Hebbian Algorithm (GHA); the SGA behaves better when extracting the less dominant eigenvectors. The SGA algorithm is further extended to the learning of minor components. The symmetrical Subspace Network is known to give a rotated basis of the dominant eigenvector subspace, but usually not the true eigenvectors themselves. Two extensions are proposed: in the first, each neuron has a scalar parameter that breaks the symmetry, so the true eigenvectors are obtained with a local and fully parallel learning rule; in the second, an arbitrary number of parallel neurons is considered, not necessarily fewer than the dimension of the input vector.
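The algorithms discussed in the abstract are one-sample stochastic Hebbian updates. As a rough illustration (not the paper's own code), the following NumPy sketch applies an SGA-style update, in which each neuron's Hebbian term is corrected by feedback from itself and, with a factor of 2, from the earlier neurons. The dimensions, learning-rate schedule, and synthetic data are illustrative assumptions; the data have a known diagonal covariance, so the true eigenvectors are the coordinate axes and convergence is easy to check.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 2  # input dimension and number of neurons (illustrative choices)

# Synthetic zero-mean data with diagonal covariance: the true principal
# eigenvectors are the coordinate axes e_0, e_1, ...
variances = np.array([5.0, 3.0, 1.0, 0.5, 0.2])

W = 0.1 * rng.normal(size=(k, d))  # one weight vector per neuron

for t in range(20000):
    lr = 1.0 / (200.0 + t)                    # decreasing gain (assumed schedule)
    x = np.sqrt(variances) * rng.normal(size=d)  # one training sample
    y = W @ x                                 # linear neuron outputs
    for j in range(k):
        # SGA-style feedback: the neuron's own term plus twice the terms of
        # the earlier neurons (the factor 2 on i < j is what distinguishes
        # this update from the GHA form).
        fb = y[j] * W[j] + 2.0 * sum((y[i] * W[i] for i in range(j)),
                                     np.zeros(d))
        W[j] += lr * y[j] * (x - fb)

# After training, each weight vector should align (up to sign) with a
# coordinate axis: W[0] with e_0, W[1] with e_1.
for j in range(k):
    w = W[j] / np.linalg.norm(W[j])
    print(j, np.round(np.abs(w), 2))
```

Because the first neuron's update reduces to Oja's single-unit rule, W[0] tends to the unit-norm principal eigenvector, and the feedback terms deflate that direction away for the later neurons.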