Differential Evolution Training Algorithm for Feed-Forward Neural Networks

  • Authors:
  • Jarmo Ilonen; Joni-Kristian Kamarainen; Jouni Lampinen

  • Affiliations:
  • Laboratory of Information Processing, Lappeenranta University of Technology, P.O. Box 20, FIN-53851 Lappeenranta, Finland. e-mail: ilonen@lut.fi, jkamarai@lut.fi

  • Venue:
  • Neural Processing Letters
  • Year:
  • 2003


Abstract

An evolutionary optimization method over continuous search spaces, differential evolution, has recently been applied successfully to real-world and artificial optimization problems and has also been proposed for neural network training. However, differential evolution has not been comprehensively studied in the context of training neural network weights, i.e., how useful it is in finding the global optimum at the expense of convergence speed. In this study, differential evolution is analyzed as a candidate global optimization method for feed-forward neural networks. In comparison to gradient-based methods, differential evolution does not appear to provide any distinct advantage in terms of learning rate or solution quality. Differential evolution is instead better suited to validating reached optima and to developing regularization terms and non-conventional transfer functions that do not necessarily provide gradient information.
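
The sketch below illustrates the kind of training setup the abstract describes: optimizing the flattened weights of a small feed-forward network with classic DE/rand/1/bin, where the mean squared training error serves as the cost function. It is a minimal illustration, not the authors' implementation; the network size, the DE parameters (population size, F, CR), and the XOR task are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID = 2, 4
DIM = N_IN * N_HID + N_HID + N_HID + 1   # all weights and biases, flattened

def unpack(vec):
    """Split a flat parameter vector into layer weights and biases."""
    i = 0
    W1 = vec[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = vec[i:i + N_HID]; i += N_HID
    W2 = vec[i:i + N_HID]; i += N_HID
    b2 = vec[i]
    return W1, b1, W2, b2

def mse(vec):
    """Mean squared training error of the network (the DE cost function)."""
    W1, b1, W2, b2 = unpack(vec)
    h = np.tanh(X @ W1 + b1)      # hidden layer
    out = h @ W2 + b2             # linear output unit
    return np.mean((out - y) ** 2)

# DE/rand/1/bin control parameters (illustrative values)
NP, F, CR, GENERATIONS = 40, 0.8, 0.9, 500

pop = rng.uniform(-1.0, 1.0, size=(NP, DIM))
cost = np.array([mse(ind) for ind in pop])

for g in range(GENERATIONS):
    for i in range(NP):
        # pick three mutually distinct individuals, all different from i
        a, b, c = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])        # differential mutation
        cross = rng.random(DIM) < CR
        cross[rng.integers(DIM)] = True                # at least one gene from the mutant
        trial = np.where(cross, mutant, pop[i])        # binomial crossover
        trial_cost = mse(trial)
        if trial_cost <= cost[i]:                      # greedy one-to-one selection
            pop[i], cost[i] = trial, trial_cost

best = pop[np.argmin(cost)]
print("best training MSE:", cost.min())
```

Note that the cost function only needs network outputs, not gradients, which is why the abstract points to non-conventional transfer functions and regularization terms as the more natural use case for this approach.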