Introducing asymmetry into interneuron learning

  • Authors: Colin Fyfe
  • Affiliations: -
  • Venue: Neural Computation
  • Year: 1995

Abstract

A review is given of a new artificial neural network architecture in which the weights converge to the principal component subspace. The weights learn using only simple Hebbian learning, yet require no clipping, normalization, or weight decay. The net self-organizes using negative feedback of activation from a set of "interneurons" to the input neurons. By allowing this negative feedback from the interneurons to act on other interneurons, we can introduce the necessary asymmetry to cause convergence to the actual principal components. Simulations and analysis confirm such convergence.
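The mechanism the abstract describes can be illustrated with a short sketch. This is not Fyfe's exact formulation: it assumes the asymmetry takes the familiar deflation form, in which interneuron i's negative feedback is subtracted from the residual before later interneurons (and interneuron i's own weight vector) are updated, making the rule a generalized-Hebbian-style update. All names here (W, eta, n_out) and the toy data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: correlated 5-D inputs with a clear principal-component structure.
n_samples, n_in, n_out = 2000, 5, 3
A = rng.normal(size=(n_in, n_in))
X = rng.normal(size=(n_samples, n_in)) @ A.T
X -= X.mean(axis=0)                              # zero-mean inputs

W = rng.normal(scale=0.1, size=(n_out, n_in))    # interneuron weight vectors
eta = 0.001                                      # learning rate

for epoch in range(30):
    for x in X:
        y = W @ x                                # feedforward interneuron activations
        e = x.copy()                             # residual fed back to the input layer
        for i in range(n_out):
            # Subtract interneuron i's negative feedback; earlier units never
            # see it, later ones do -- the asymmetry in the architecture.
            e -= y[i] * W[i]
            # Simple Hebbian update on the residual: no clipping,
            # normalization, or weight decay is needed for stability.
            W[i] += eta * y[i] * e

# Check: compare learned weights with the leading eigenvectors of the covariance.
C = X.T @ X / n_samples
_, eigvecs = np.linalg.eigh(C)
pcs = eigvecs[:, ::-1][:, :n_out].T              # top principal components, row-wise
for i in range(n_out):
    cos = abs(W[i] @ pcs[i]) / (np.linalg.norm(W[i]) * np.linalg.norm(pcs[i]))
    print(f"interneuron {i}: |cos| with PC{i+1} = {cos:.3f}")
```

The symmetric case the abstract contrasts against would compute a single residual e = x - W.T @ y and update every row with it; that version converges only to the principal component subspace, not to the individual principal components.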