Hardware-friendly Higher-Order Neural Network Training using Distributed Evolutionary Algorithms

  • Authors:
  • M. G. Epitropakis; V. P. Plagianakos; M. N. Vrahatis

  • Affiliations:
  • Computational Intelligence Laboratory (CI Lab), Department of Mathematics, University of Patras, GR-26110 Patras, Greece; Department of Computer Science and Biomedical Informatics, University of Central Greece, Papassiopoulou 2-4, GR-35100 Lamia, Greece; Computational Intelligence Laboratory (CI Lab), Department of Mathematics, University of Patras, GR-26110 Patras, Greece

  • Venue:
  • Applied Soft Computing
  • Year:
  • 2010

Abstract

In this paper, we study the class of Higher-Order Neural Networks, and in particular Pi-Sigma Networks. The performance of Pi-Sigma Networks is evaluated through several well-known neural network training benchmarks. In the experiments reported here, Distributed Evolutionary Algorithms are employed for Pi-Sigma network training; more specifically, distributed versions of the Differential Evolution and Particle Swarm Optimization algorithms. To this end, each processor is assigned a subpopulation of potential solutions. The subpopulations evolve independently in parallel, and occasional migration allows cooperation between them. The proposed approach is applied to train Pi-Sigma Networks using threshold activation functions. Moreover, the weights and biases are confined to a narrow band of integers, constrained in the range [-32, 32], so the trained Pi-Sigma neural networks can be represented using 6 bits. Such networks are better suited for hardware implementation than their real-weight counterparts, and are to some extent immune to low-amplitude noise that may contaminate the training data. Experimental results suggest that the proposed training process is fast, stable, and reliable, and that the distributed trained Pi-Sigma Networks exhibit good generalization capabilities.
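The abstract describes two mechanisms concretely enough to sketch: a Pi-Sigma forward pass (a single product unit over trainable linear summing units, here with a hard-threshold activation and integer weights clipped to [-32, 32]) and island-model Differential Evolution, in which subpopulations evolve independently and occasionally migrate individuals between islands. The Python sketch below is an illustrative reconstruction under those assumptions, not the authors' implementation: all function names, the ring migration topology, and the hyperparameter values (F, CR, population and island counts) are our own choices, and the Particle Swarm Optimization variant mentioned in the abstract is omitted.

```python
import numpy as np

# Hypothetical sketch; names, topology, and hyperparameters are assumptions.
WMIN, WMAX = -32, 32  # integer weight band from the paper


def pi_sigma_forward(x, weights, biases):
    """Pi-Sigma forward pass: product of linear (sigma) units,
    passed through a hard-threshold activation."""
    sums = weights @ x + biases          # K summing units
    net = np.prod(sums)                  # single product (pi) unit
    return 1.0 if net >= 0.0 else 0.0    # threshold activation


def decode(individual, n_inputs, k_units):
    """Unpack a flat integer genome into weights and biases."""
    w = individual[: n_inputs * k_units].reshape(k_units, n_inputs)
    b = individual[n_inputs * k_units:]
    return w, b


def fitness(individual, X, y, n_inputs, k_units):
    """Classification error on the training set (lower is better)."""
    w, b = decode(individual, n_inputs, k_units)
    preds = np.array([pi_sigma_forward(x, w, b) for x in X])
    return np.mean(preds != y)


def de_step(pop, fits, F, CR, rng, evaluate):
    """One generation of DE/rand/1/bin over an integer population."""
    n, d = pop.shape
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True    # guarantee one mutant component
        trial = np.where(cross, mutant, pop[i])
        trial = np.clip(np.rint(trial), WMIN, WMAX).astype(int)  # stay in band
        f = evaluate(trial)
        if f <= fits[i]:                 # greedy one-to-one selection
            pop[i], fits[i] = trial, f
    return pop, fits


def island_de(X, y, n_inputs, k_units, n_islands=4, pop_size=20,
              gens=200, migrate_every=10, F=0.5, CR=0.9, seed=0):
    """Island-model DE: independent subpopulations with occasional
    ring-topology migration of each island's best individual."""
    rng = np.random.default_rng(seed)
    d = n_inputs * k_units + k_units
    evaluate = lambda ind: fitness(ind, X, y, n_inputs, k_units)
    pops = [rng.integers(WMIN, WMAX + 1, (pop_size, d)) for _ in range(n_islands)]
    fits = [np.array([evaluate(ind) for ind in pop]) for pop in pops]
    for g in range(gens):
        for i in range(n_islands):
            pops[i], fits[i] = de_step(pops[i], fits[i], F, CR, rng, evaluate)
        if (g + 1) % migrate_every == 0:  # occasional migration step
            for src in range(n_islands):
                dst = (src + 1) % n_islands
                best = np.argmin(fits[src])
                worst = np.argmax(fits[dst])
                pops[dst][worst] = pops[src][best]
                fits[dst][worst] = fits[src][best]
    best_isl = min(range(n_islands), key=lambda i: fits[i].min())
    best = pops[best_isl][np.argmin(fits[best_isl])]
    return decode(best, n_inputs, k_units)
```

In this sketch the migrant replaces the worst individual on the receiving island, a common island-model policy; the islands run sequentially here for clarity, whereas the paper assigns each subpopulation to its own processor.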