Stochastic trapping in a solvable model of on-line independent component analysis

  • Authors:
  • Magnus Rattray

  • Affiliations:
  • Computer Science Department, University of Manchester, Manchester M13 9PL, U.K.

  • Venue:
  • Neural Computation
  • Year:
  • 2002

Abstract

Previous analytical studies of on-line independent component analysis (ICA) learning rules have focused on asymptotic stability and efficiency. In practice, the transient stages of learning are often more significant in determining the success of an algorithm. This is demonstrated here with an analysis of a Hebbian ICA algorithm that can find a small number of nongaussian components given data composed of a linear mixture of independent source signals. An idealized data model is considered in which the sources are a mixture of nongaussian and gaussian signals, and a solution to the learning dynamics is obtained in the limit where the number of gaussian sources is infinite. Previous stability results are confirmed by expanding around optimal fixed points, where a closed-form solution to the learning dynamics is obtained. However, stochastic effects are shown to stabilize otherwise unstable suboptimal fixed points. Conditions required to destabilize one such fixed point are obtained for the case of a single nongaussian component, indicating that the initial learning rate η required to escape successfully is very low (η = O(N^-2), where N is the data dimension), resulting in very slow learning that typically requires O(N^3) iterations. Simulations confirm that this picture holds for a finite system.
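The sketch below illustrates the kind of on-line, one-unit Hebbian ICA rule discussed in the abstract on the idealized data model (a single nongaussian source mixed with gaussian sources), using the learning-rate and iteration scalings quoted there (η of order N^-2, of order N^3 updates). The specific nonlinearity (cubic), the Laplacian choice of nongaussian source, the orthogonal mixing matrix, and the constants are illustrative assumptions, not details taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 30                 # data dimension (kept small so the loop runs quickly)
eta = 1.0 / N**2       # learning rate of order N^-2, as in the abstract's scaling
T = 20 * N**3          # number of on-line updates, of order N^3

# Idealized data model: one nongaussian (Laplacian) source plus N-1 gaussian
# sources, mixed by a random orthogonal matrix so the data are already white.
A = np.linalg.qr(rng.standard_normal((N, N)))[0]

def sample_x():
    s = rng.standard_normal(N)
    s[0] = rng.laplace(scale=1.0 / np.sqrt(2.0))   # unit-variance nongaussian source
    return A @ s

# Assumed one-unit Hebbian update: w <- w + eta * x * phi(w . x), then renormalize.
# A cubic nonlinearity is used here for illustration (suitable for a
# positive-kurtosis source such as the Laplacian above).
phi = lambda u: u**3

w = rng.standard_normal(N)
w /= np.linalg.norm(w)

for t in range(T):
    x = sample_x()
    w += eta * x * phi(w @ x)
    w /= np.linalg.norm(w)

# Overlap with the nongaussian direction; values near 1 indicate the rule has
# escaped the initial (nearly orthogonal) regime and found the component.
overlap = abs(w @ A[:, 0])
print(f"overlap with nongaussian component: {overlap:.3f}")
```

Because the initial overlap is only of order N^-1/2 and the deterministic drift near that point is weak, escape is a stochastic event; a single run may need more iterations than the value of T chosen here, which is consistent with the slow transient behaviour the paper analyzes.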