Using random weights to train multilayer networks of hard-limiting units

  • Authors:
  • P. L. Bartlett; T. Downs

  • Affiliations:
  • Dept. of Electr. Eng., Univ. of Queensland, St. Lucia, Qld.

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1992


Abstract

A gradient descent algorithm suitable for training multilayer feedforward networks of processing units with hard-limiting output functions is presented. The conventional backpropagation algorithm cannot be applied in this case because the required derivatives are not available. However, if the network weights are random variables with smooth distribution functions, the probability of a hard-limiting unit taking one of its two possible values is a continuously differentiable function. In the paper, this is used to develop an algorithm similar to backpropagation, but for the hard-limiting case. It is shown that the computational framework of this algorithm is similar to that of standard backpropagation, but there is an additional computational expense involved in the estimation of gradients. Upper bounds on this estimation penalty are given. Two examples are presented which indicate that, when this algorithm is used to train networks of hard-limiting units, its performance is similar to that of conventional backpropagation applied to networks of units with sigmoidal characteristics.
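
The core idea can be sketched for a single hard-limiting unit. The following is a minimal illustration, not the paper's actual algorithm: it assumes Gaussian weight noise and a cross-entropy loss, and the function names (`p_plus`, `grad_log_loss`) are hypothetical. Because the weight vector is treated as a random variable with a smooth distribution, the firing probability is a differentiable function of its mean, so the mean can be trained by gradient descent even though the sign function itself has no useful derivative.

```python
import numpy as np
from scipy.stats import norm

# Single hard-limiting unit: output = sign(w . x), with the weight vector
# treated as Gaussian, w ~ N(mu, sigma^2 I).  Then
#   P(output = +1) = Phi(mu . x / (sigma * ||x||)),
# which is smooth in mu, so mu can be updated by gradient descent.

def p_plus(mu, x, sigma=1.0):
    """Probability that the hard-limiting unit fires (+1) for input x."""
    return norm.cdf(mu @ x / (sigma * np.linalg.norm(x)))

def grad_log_loss(mu, x, target, sigma=1.0):
    """Gradient w.r.t. mu of the cross-entropy between target in {0,1} and P(+1)."""
    z = mu @ x / (sigma * np.linalg.norm(x))
    p = norm.cdf(z)
    dz_dmu = x / (sigma * np.linalg.norm(x))
    # d/dmu [ -t*log(p) - (1-t)*log(1-p) ]
    return (p - target) / (p * (1 - p)) * norm.pdf(z) * dz_dmu

# Toy usage: learn an AND-like threshold function (linearly separable).
rng = np.random.default_rng(0)
mu = rng.normal(size=3)                          # two weights plus a bias
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for epoch in range(500):
    for (a, b), t in data:
        x = np.array([a, b, 1.0])                # append bias input
        mu -= 0.5 * grad_log_loss(mu, x, t)
print([int(p_plus(mu, np.array([a, b, 1.0])) > 0.5) for (a, b), _ in data])
```

Extending this to multiple layers, estimating the resulting gradients, and bounding the extra computational cost of that estimation is what the paper addresses; the sketch above only shows why smooth weight distributions make the probabilities differentiable.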