Stochastic competitive learning

  • Authors: B. Kosko
  • Affiliations: Dept. of Electr. Eng., Univ. of Southern California, Los Angeles, CA
  • Venue: IEEE Transactions on Neural Networks
  • Year: 1991

Abstract

Competitive learning systems are examined as stochastic dynamical systems, including continuous and discrete formulations of unsupervised, supervised, and differential competitive learning. These systems estimate an unknown probability density function from random pattern samples and behave as adaptive vector quantizers. Synaptic vectors in feedforward competitive neural networks quantize the pattern space and converge to pattern-class centroids or local probability maxima. A stochastic Lyapunov argument shows that competitive synaptic vectors converge to centroids exponentially quickly and reduces competitive learning to stochastic gradient descent. Convergence does not depend on a specific dynamical model of how neuronal activations change. These results extend to competitive estimation of local covariances and higher-order statistics.
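The abstract's claim that synaptic vectors quantize the pattern space and drift toward class centroids can be illustrated with a minimal sketch of the simplest variant, unsupervised winner-take-all competitive learning. The function name, the decaying gain schedule, and the two-cluster test data below are illustrative assumptions, not the paper's own simulations.

```python
import numpy as np

def competitive_learning(samples, num_units, gain=0.05, seed=0):
    """Minimal sketch of unsupervised competitive learning.

    Synaptic vectors compete for each random pattern sample; only the
    winner (nearest in Euclidean distance) moves toward the sample, so
    each winner behaves as a stochastic-gradient estimate of the
    centroid of the patterns it quantizes.
    """
    rng = np.random.default_rng(seed)
    # Initialize synaptic vectors at randomly chosen pattern samples.
    synapses = samples[rng.choice(len(samples), num_units, replace=False)].copy()
    for t, x in enumerate(samples):
        # Winner-take-all competition: the nearest synaptic vector wins.
        winner = np.argmin(np.linalg.norm(synapses - x, axis=1))
        # Move the winner toward the sample with a slowly decaying gain
        # (an illustrative schedule chosen to aid convergence).
        c_t = gain / (1.0 + gain * t)
        synapses[winner] += c_t * (x - synapses[winner])
    return synapses

# Usage: quantize a two-cluster pattern space with two synaptic vectors.
rng = np.random.default_rng(1)
cluster_a = rng.normal(loc=-2.0, scale=0.5, size=(500, 2))
cluster_b = rng.normal(loc=+2.0, scale=0.5, size=(500, 2))
patterns = rng.permutation(np.vstack([cluster_a, cluster_b]))
print(competitive_learning(patterns, num_units=2))
# Each synaptic vector should land near one cluster centroid,
# roughly (-2, -2) and (+2, +2).
```

The update `synapses[winner] += c_t * (x - synapses[winner])` is the stochastic-gradient step the abstract refers to: averaged over samples, it vanishes exactly when the winning vector sits at the centroid of its decision region.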