A symmetric linear neural network that learns principal components and their variances

  • Authors:
  • F. Peper; H. Noda

  • Affiliations:
  • Commun. Res. Lab., Ministry of Posts & Telecommunications, Kobe, Japan

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1996


Abstract

This paper proposes a linear neural network for principal component analysis whose weight vector lengths converge to the variances of the principal components in the input data. The neural network breaks the symmetry in its learning process through the differences in weight vector lengths and, as opposed to other linear neural networks described in the literature, does not need to assume any asymmetries in its structure to extract the principal components. We prove the asymptotic stability of a stationary solution of the network's learning equation. Simulations show that the set of weight vectors converges to this solution. A comparison of convergence speeds shows that in the simulations the proposed neural network is about as fast as Sanger's generalized Hebbian algorithm (GHA) network, the weighted subspace rule network of Oja et al., and Xu's LMSER network (weighted linear version).
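
The abstract does not reproduce the proposed network's learning equation, so the sketch below instead illustrates Sanger's generalized Hebbian algorithm (GHA), one of the comparison baselines named above; the function name, parameters, and toy data are illustrative assumptions, not part of the paper. Note the contrast with the proposed network: GHA's weight vectors converge to unit length and its deflation term makes the update asymmetric across neurons, whereas the paper's network is structurally symmetric and its weight vector lengths converge to the component variances.

```python
import numpy as np

def gha_pca(X, n_components, lr=1e-3, epochs=50, seed=0):
    """Sanger's generalized Hebbian algorithm (a baseline cited in the
    abstract, not the paper's own rule). Rows of X are samples and are
    assumed zero-mean. Returns a (n_components, n_features) matrix whose
    rows approximate the leading principal directions (unit length)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_components, n_features))
    for _ in range(epochs):
        for x in X:
            y = W @ x  # outputs of the linear neurons
            # Sanger's rule: Hebbian term minus lower-triangular deflation
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Usage on toy data whose principal directions are axis-aligned
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
X -= X.mean(axis=0)
W = gha_pca(X, n_components=2)
print(W / np.linalg.norm(W, axis=1, keepdims=True))  # approx. leading PCs
```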