Training Neural Networks with Threshold Activation Functions and Constrained Integer Weights

  • Venue: IJCNN '00 Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN'00), Volume 5
  • Year: 2000

Abstract

Evolutionary neural network training algorithms are presented and applied to train neural networks whose weights are confined to a narrow band of integers. We constrain the weights and biases to the range [-2^(k-1) + 1, 2^(k-1) - 1], for k = 3, 4, 5, so that each can be represented by just k bits. Such networks are better suited for hardware implementation than those with real-valued weights. Mathematical operations that are easy to implement in software are often burdensome, and therefore costly, in hardware; hardware-friendly algorithms are essential to ensure the functionality and cost-effectiveness of a hardware implementation. To this end, in addition to the integer weights, the trained networks use only threshold activation functions, which makes hardware implementation even easier. The algorithms have been designed with the goals that the resulting integer weights require fewer bits of storage and that digital arithmetic operations between them are easy to implement in hardware. Clearly, training in a constrained weight space yields smaller weights and requires less memory. On the other hand, as we have found here, training can be more effective and efficient when larger weights are allowed, so for a given application a trade-off between effectiveness and memory consumption must be considered. Our intention is to present results of evolutionary algorithms on this difficult task. Based on applying the proposed class of methods to classical neural network benchmarks, our experience is that these methods are effective and reliable.
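The setup the abstract describes is concrete enough to sketch. Below is a minimal, hypothetical Python illustration, not the authors' actual algorithm: a simple (mu + lambda)-style evolutionary search over k-bit integer weights for a small 2-2-1 network with threshold activations, evaluated on the classical XOR benchmark with k = 3 (weights and biases in [-3, 3]). All function and parameter names here are illustrative assumptions.

```python
import random

K = 3                                          # bits per weight (assumed k = 3)
WMIN, WMAX = -(2**(K - 1)) + 1, 2**(K - 1) - 1  # constrained band [-3, 3]
N_WEIGHTS = 9                                  # 2-2-1 net: 6 hidden + 3 output weights/biases

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def step(z):
    """Threshold activation: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def forward(w, x):
    """Feed-forward pass of a 2-2-1 threshold network; w holds 9 integers."""
    h1 = step(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = step(w[3] * x[0] + w[4] * x[1] + w[5])
    return step(w[6] * h1 + w[7] * h2 + w[8])

def errors(w):
    """Number of misclassified XOR patterns (0 means solved)."""
    return sum(forward(w, x) != t for x, t in XOR)

def random_individual():
    return [random.randint(WMIN, WMAX) for _ in range(N_WEIGHTS)]

def mutate(w, rate=0.3):
    """Integer mutation: nudge each weight by +/-1, clipped to the band."""
    return [min(WMAX, max(WMIN, g + random.choice((-1, 1))))
            if random.random() < rate else g for g in w]

def evolve(pop_size=20, generations=500):
    """Keep the best half each generation, refill with mutated survivors."""
    pop = [random_individual() for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=errors)
        if errors(pop[0]) == 0:
            return pop[0], gen
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=errors), generations

if __name__ == "__main__":
    best, gen = evolve()
    print(f"generation {gen}: weights = {best}, errors = {errors(best)}")
```

One reason population-based search fits this setting: with threshold activations the error is a piecewise-constant function of the weights, so gradient information is unavailable, while the constrained integer weight space is small enough for evolutionary operators to explore directly.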