Parallel evolutionary training algorithms for “hardware-friendly” neural networks

  • Authors:
  • Vassilis P. Plagianakos; Michael N. Vrahatis

  • Affiliations:
  • Department of Mathematics and University of Patras Artificial Intelligence Research Center (UPAIRC), University of Patras, GR-26110 Patras, Greece (E-mail: vpp@math.upatras.gr); Department of Mathematics and University of Patras Artificial Intelligence Research Center (UPAIRC), University of Patras, GR-26110 Patras, Greece (E-mail: vrahatis@math.upatras.gr)

  • Venue:
  • Natural Computing: an international journal
  • Year:
  • 2002

Abstract

In this paper, Parallel Evolutionary Algorithms for integer weight neural network training are presented. To this end, each processor is assigned a subpopulation of potential solutions. The subpopulations are evolved independently in parallel, and occasional migration is employed to allow cooperation between them. The proposed algorithms are applied to train neural networks using threshold activation functions and weight values confined to a narrow band of integers. We constrain the weights and biases to the range [−3, 3], so that each can be represented by just 3 bits. Such neural networks are better suited for hardware implementation than their real-weight counterparts. These algorithms have been designed with the fact in mind that the resulting integer weights require fewer bits to store and that digital arithmetic operations between them are easier to implement in hardware. Another advantage of the proposed evolutionary strategies is that they are capable of continuing the training process “on-chip”, if needed. Our intention is to present results of parallel evolutionary algorithms on this difficult task. Based on the application of the proposed class of methods to classical neural network problems, our experience is that these methods are effective and reliable.
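
The island-model scheme the abstract describes lends itself to a compact illustration. The following Python sketch is ours, not the authors' implementation: subpopulations of integer weight vectors confined to [−3, 3] evolve independently under a simple mutate-and-select rule (a stand-in for the paper's evolutionary operators), with periodic ring migration of each island's best individual. The parallel evolution is simulated sequentially here for clarity, and the 2-2-1 threshold network, the XOR task, and all function and parameter names are illustrative assumptions.

```python
# Hypothetical sketch of island-model evolutionary training of an
# integer-weight threshold network; not the authors' implementation.
import random

WEIGHT_RANGE = (-3, 3)  # weights/biases confined to [-3, 3]: 3 bits each


def threshold(x):
    """Hard-threshold activation, as used for hardware-friendly networks."""
    return 1 if x >= 0 else 0


def forward(weights, inputs, n_hidden):
    """Tiny 2-n_hidden-1 network of threshold units; weights is a flat list."""
    n_in = len(inputs)
    idx, hidden = 0, []
    for _ in range(n_hidden):
        s = sum(w * x for w, x in zip(weights[idx:idx + n_in], inputs))
        idx += n_in
        s += weights[idx]  # hidden bias
        idx += 1
        hidden.append(threshold(s))
    s = sum(w * h for w, h in zip(weights[idx:idx + n_hidden], hidden))
    idx += n_hidden
    return threshold(s + weights[idx])  # output bias


def fitness(weights, dataset, n_hidden):
    """Number of misclassified patterns (lower is better)."""
    return sum(forward(weights, x, n_hidden) != y for x, y in dataset)


def evolve_island(pop, dataset, n_hidden, rate=0.3):
    """One generation: mutate each individual by +/-1, keep the better variant."""
    new_pop = []
    for ind in pop:
        child = [min(WEIGHT_RANGE[1], max(WEIGHT_RANGE[0],
                 w + random.choice((-1, 1)))) if random.random() < rate else w
                 for w in ind]
        new_pop.append(min(ind, child,
                           key=lambda v: fitness(v, dataset, n_hidden)))
    return new_pop


def train(dataset, n_hidden=2, n_islands=4, pop_size=20,
          generations=200, migration_every=20):
    """Evolve n_islands subpopulations; migrate bests around a ring."""
    n_weights = 2 * n_hidden + n_hidden + n_hidden + 1  # 2-n_hidden-1 topology
    islands = [[[random.randint(*WEIGHT_RANGE) for _ in range(n_weights)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for g in range(generations):
        islands = [evolve_island(p, dataset, n_hidden) for p in islands]
        if (g + 1) % migration_every == 0:  # occasional cooperation
            bests = [min(p, key=lambda v: fitness(v, dataset, n_hidden))
                     for p in islands]
            for i, p in enumerate(islands):  # each island receives its
                p[random.randrange(pop_size)] = list(bests[i - 1])  # neighbor's best
    return min((ind for p in islands for ind in p),
               key=lambda v: fitness(v, dataset, n_hidden))


if __name__ == "__main__":
    xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    best = train(xor)
    print("best integer weights:", best, "errors:", fitness(best, xor, 2))
```

Because every weight stays in the small integer band and the activations are hard thresholds, the evolved network needs only small-integer additions and comparisons at inference time, which is what makes such networks attractive for hardware implementation and for continuing the training on-chip.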