The use of random weights for the training of multilayer networks of neurons with Heaviside characteristics

  • Authors:
  • T. Downs; R. J. Gaynier

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of Queensland, St. Lucia, Qld. 4072, Australia (both authors)

  • Venue:
  • Mathematical and Computer Modelling: An International Journal
  • Year:
  • 1995

Abstract

Artificial neural networks have, in recent years, been applied very successfully in a wide range of areas. A major reason for this success has been the existence of a training algorithm called backpropagation. This algorithm relies upon the neural units in a network having input/output characteristics that are continuously differentiable. Such units are significantly harder to implement in silicon than neural units with Heaviside (step-function) characteristics. In this paper, we show how a training algorithm similar to backpropagation can be developed for two-layer networks of Heaviside units by treating the network weights (i.e., interconnection strengths) as random variables. This is then used as a basis for the development of a training algorithm for networks with any number of layers, drawing upon the idea of internal representations. Some examples are given to illustrate the performance of these learning algorithms.
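To make the abstract's central idea concrete, the sketch below illustrates one standard way such an approach can work; it is a minimal illustration under our own assumptions, not the authors' exact algorithm. If each weight of a Heaviside unit is treated as a Gaussian random variable w_i ~ N(mu_i, sigma^2), the pre-activation w.x is itself Gaussian, so the probability that the step unit fires is the Gaussian CDF of the weight means. That expected output is smooth in mu, which makes a backprop-style gradient update possible even though every sampled network uses hard step units. All names and parameters here (fire_prob, SIGMA, the OR toy task) are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch: weights w_i ~ N(mu_i, SIGMA^2), independent.
# Pre-activation w.x is then N(mu.x, SIGMA^2 * ||x||^2), so the Heaviside
# unit fires with probability Phi(mu.x / (SIGMA * ||x||)) -- a smooth
# function of the weight means mu that gradient descent can optimise.

SIGMA = 1.0

def fire_prob(mu, x):
    """P(step unit fires) = Phi(mu.x / (SIGMA * ||x||))."""
    return norm.cdf(mu @ x / (SIGMA * np.linalg.norm(x)))

def fire_prob_grad(mu, x):
    """Gradient of the firing probability with respect to mu."""
    s = SIGMA * np.linalg.norm(x)
    return norm.pdf(mu @ x / s) * x / s

# Toy task: learn OR. Inputs are augmented with a constant 1 so the bias
# is just another random weight (and ||x|| is never zero).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

mu = np.zeros(3)
lr = 1.0
for _ in range(500):
    for x, t in zip(X, y):
        p = fire_prob(mu, x)
        mu -= lr * 2.0 * (p - t) * fire_prob_grad(mu, x)  # squared-error step

# Firing probabilities approach [0, 1, 1, 1] as ||mu|| grows.
print(np.round([fire_prob(mu, x) for x in X], 2))
```

The design point is that no gradient ever passes through the step function itself: differentiation happens at the level of the distribution over weights, which is what allows hard-threshold hardware to be trained with a smooth objective.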