Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks

  • Authors:
  • M. Jabri; B. Flower

  • Affiliations:
  • Sch. of Electr. Eng., Sydney Univ., NSW;-

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1992

Abstract

Previous work on analog VLSI implementations of multilayer perceptrons with on-chip learning has mainly targeted algorithms such as back-propagation. Although back-propagation is efficient, its analog VLSI implementation requires excessive computational hardware. It is shown that gradient descent using a direct approximation of the gradient, rather than back-propagation, is more economical for parallel analog implementations, and that this technique (called "weight perturbation") is suitable for multilayer recurrent networks as well. A discrete-level analog implementation demonstrating the training of an XOR network is presented as an example.
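
The abstract describes the technique only at a high level: rather than back-propagating errors, weight perturbation estimates each weight's gradient directly by perturbing that weight and measuring the resulting change in the network error, roughly (E(w + delta) - E(w)) / delta. The sketch below is a minimal illustration of that idea on the XOR task, not the paper's analog implementation; the network size, perturbation size, learning rate, and all identifiers are assumptions made for the example.

```python
# Minimal sketch of weight-perturbation learning on XOR.
# All sizes and hyperparameters are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# XOR training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# 2-2-1 multilayer perceptron with sigmoid units
W1 = rng.normal(scale=0.5, size=(2, 2))
b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1))
b2 = np.zeros(1)
params = [W1, b1, W2, b2]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def error():
    """Mean squared error of the network over the XOR set."""
    h = sigmoid(X @ params[0] + params[1])
    y = sigmoid(h @ params[2] + params[3])
    return np.mean((y - T) ** 2)

delta = 1e-3   # perturbation size (assumed)
eta = 0.5      # learning rate (assumed)

for epoch in range(5000):
    base = error()
    grads = []
    for p in params:
        g = np.zeros_like(p)
        # Perturb one weight at a time and measure the change in error:
        # dE/dw ~= (E(w + delta) - E(w)) / delta
        for idx in np.ndindex(p.shape):
            p[idx] += delta
            g[idx] = (error() - base) / delta
            p[idx] -= delta
        grads.append(g)
    # Gradient-descent update using the perturbation-based estimates
    for p, g in zip(params, grads):
        p -= eta * g

print("final MSE:", error())
```

Only forward passes and a weight update are needed, which is what makes the scheme attractive for parallel analog hardware where implementing the back-propagation pass would be costly.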