A Lamarckian Hybrid of Differential Evolution and Conjugate Gradients for Neural Network Training

  • Authors:
  • Krzysztof Bandurski; Wojciech Kwedlo

  • Affiliations:
  • Faculty of Computer Science, Bialystok University of Technology, 15-351 Bialystok, Poland (both authors)

  • Venue:
  • Neural Processing Letters
  • Year:
  • 2010

Abstract

The paper describes two schemes that follow the model of Lamarckian evolution, combining differential evolution (DE), a population-based stochastic global search method, with the local optimization method of conjugate gradients (CG). In the first scheme, each offspring is fine-tuned by CG before competing with its parent. In the second, CG is used to improve both parents and offspring in a manner that is completely seamless for individuals surviving more than one generation. Experiments involved training the weights of feed-forward neural networks on three synthetic and four real-life problems. In six of the seven cases, the DE–CG hybrid, which preserves and reuses information about each solution's local optimization process, outperformed two recent variants of DE.
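The Lamarckian idea in the first scheme — locally refine each offspring and write the refined weights back into its genotype before selection — can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the objective here is a toy sphere function standing in for a network's training error, and plain gradient descent in `refine` stands in for the conjugate-gradient routine; all function names and parameter values are assumptions.

```python
import random

# Toy objective standing in for a feed-forward network's training error
# (assumption: the sphere function; the paper optimizes network weights).
def loss(w):
    return sum(x * x for x in w)

def grad(w):
    return [2.0 * x for x in w]

def refine(w, steps=5, lr=0.1):
    """Local fine-tuning of a solution. The paper uses conjugate gradients;
    a few plain gradient-descent steps serve as a simpler stand-in here."""
    for _ in range(steps):
        g = grad(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

def lamarckian_de(dim=5, pop_size=10, gens=30, F=0.5, CR=0.9, seed=1):
    """DE/rand/1/bin where each trial vector is refined before it
    competes with its parent (Lamarckian: refined genes are kept)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = refine(trial)  # Lamarckian step: write back refined weights
            if loss(trial) <= loss(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=loss)

best = lamarckian_de()
```

Because the refined weights replace the trial vector's genes, improvements found by the local search are inherited by later generations, which is the defining trait of the Lamarckian model as opposed to the Baldwinian one (where only the refined fitness would be used).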