Weight perturbation: An optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks

  • Authors:
  • Marwan Jabri; Barry Flower

  • Affiliations:
  • Systems Engineering and Design Automation Laboratory, School of Electrical Engineering, University of Sydney, Sydney, Australia

  • Venue:
  • Neural Computation
  • Year:
  • 1991

Abstract

Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms like backpropagation. Although backpropagation is efficient, its implementation in analog VLSI requires excessive computational hardware. In this paper we show that, for analog parallel implementations, the use of gradient descent with direct approximation of the gradient by weight perturbation, instead of backpropagation, significantly reduces hardware complexity. Gradient descent by weight perturbation eliminates the need for derivative and bidirectional circuits in on-chip learning, and for access to the output states of hidden-layer neurons in off-chip learning. We also show that weight perturbation can be used to implement recurrent networks. A discrete-level analog implementation is described, with the training of an XOR network as an example.
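
The core idea is that the error gradient with respect to each weight can be estimated from two forward evaluations of the network error (before and after a small perturbation of that weight), so no derivative circuitry or backward signal path is required. The following is a minimal software sketch of weight-perturbation training on the XOR example mentioned in the abstract, assuming a 2-2-1 sigmoid network; the layer sizes, learning rate, perturbation amplitude, and stopping criterion are illustrative choices, not values taken from the paper.

```python
# Minimal sketch of weight-perturbation learning on XOR (illustrative
# hyperparameters, not the paper's analog implementation).
import numpy as np

rng = np.random.default_rng(0)

# XOR training set.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

# Weight matrices with the bias folded in as an extra input of 1.
W1 = rng.normal(scale=1.0, size=(3, 2))   # input (+bias) -> hidden
W2 = rng.normal(scale=1.0, size=(3, 1))   # hidden (+bias) -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(np.append(x, 1.0) @ W1)
    return sigmoid(np.append(h, 1.0) @ W2)

def total_error():
    # Sum-of-squares error over the whole training set.
    return sum(float(np.sum((forward(x) - t) ** 2)) for x, t in zip(X, T))

eta = 0.5      # learning rate (illustrative)
pert = 1e-3    # perturbation amplitude (illustrative)

for epoch in range(10000):
    for W in (W1, W2):
        for idx in np.ndindex(W.shape):
            base = total_error()           # error with the current weights
            W[idx] += pert                 # perturb one weight
            delta = total_error() - base   # measured change in error
            W[idx] -= pert                 # remove the perturbation
            # Forward-difference estimate of dE/dw drives the gradient step,
            # using only two error measurements per weight.
            W[idx] -= eta * delta / pert
    if total_error() < 1e-2:
        break

for x in X:
    print(x, forward(x).round(3))
```

Because the update for each weight depends only on the globally measured error and the locally applied perturbation, the same loop maps naturally onto analog hardware: no derivative or bidirectional circuits are needed, and hidden-layer outputs never have to be read off-chip. Convergence of this toy sketch depends on the random initialization and hyperparameters, which may need adjustment.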