Invasive connectionist evolution

  • Authors:
  • Paulito P. Palmes; Shiro Usui

  • Affiliations:
  • RIKEN Brain Science Institute, Saitama, Japan; RIKEN Brain Science Institute, Saitama, Japan

  • Venue:
  • ICNC'05: Proceedings of the First International Conference on Advances in Natural Computation - Volume Part III
  • Year:
  • 2005


Abstract

The typical automatic way to search for an optimal neural network is to combine structure evolution by evolutionary computation with weight adaptation by backpropagation. In this model, structure and weight optimization are carried out by two different algorithms, each operating in its own search space, so every change in network topology during structure evolution requires the entire set of weights to be relearned by backpropagation. Because of this inefficiency, we propose that the evolution of network structure and weights be purely stochastic and tightly integrated, so that good weights and structures are not relearned but propagated from generation to generation. Since this model does not depend on gradient information, the entire process allows more flexibility in the implementation of its evolution and in the formulation of its fitness function. This study demonstrates how invasive connectionist evolution can easily be implemented using particle swarm optimization (PSO), evolutionary programming (EP), and differential evolution (DE), with good performance on cancer and glass classification tasks.
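
To make the idea concrete, the sketch below shows one possible reading of the approach: each individual in the population carries both a connection mask (structure) and a weight vector, and a DE-style update evolves them jointly without any backpropagation, so surviving structure/weight pairs propagate across generations. This is a minimal illustration under assumed choices, not the authors' exact algorithm; the synthetic dataset, network size, and hyperparameters (POP, GENS, F, CR, flip rate) are all illustrative assumptions standing in for the cancer and glass benchmarks.

```python
# Minimal sketch of jointly evolving structure (mask) and weights with a
# DE-style update; no gradients are used, so any fitness function works.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset (stand-in for the cancer/glass benchmarks).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(float)

N_HID = 6                        # hidden units (assumed)
N_W = 4 * N_HID + N_HID          # input->hidden plus hidden->output weights

def forward(weights, mask, X):
    """Single-hidden-layer net; the mask zeroes out pruned connections."""
    w = weights * mask
    w_ih = w[: 4 * N_HID].reshape(4, N_HID)
    w_ho = w[4 * N_HID :]
    h = np.tanh(X @ w_ih)
    return 1.0 / (1.0 + np.exp(-(h @ w_ho)))

def fitness(weights, mask):
    """Classification error on the training set."""
    pred = forward(weights, mask, X) > 0.5
    return np.mean(pred != y)

POP, GENS, F, CR = 30, 200, 0.6, 0.9          # illustrative settings
weights = rng.normal(scale=0.5, size=(POP, N_W))
masks = (rng.random((POP, N_W)) < 0.8).astype(float)   # structure genes
errs = np.array([fitness(w, m) for w, m in zip(weights, masks)])

for gen in range(GENS):
    for i in range(POP):
        a, b, c = rng.choice([j for j in range(POP) if j != i], 3, replace=False)
        # DE/rand/1 mutation and crossover on the weight genes.
        trial_w = np.where(rng.random(N_W) < CR,
                           weights[a] + F * (weights[b] - weights[c]),
                           weights[i])
        # Structure evolves stochastically too: occasionally flip connections.
        trial_m = masks[i].copy()
        flip = rng.random(N_W) < 0.02
        trial_m[flip] = 1.0 - trial_m[flip]
        e = fitness(trial_w, trial_m)
        if e <= errs[i]:          # greedy selection keeps good weight/structure pairs
            weights[i], masks[i], errs[i] = trial_w, trial_m, e

print("best training error:", errs.min())
```

The same skeleton accommodates PSO or EP by swapping the trial-vector construction for a velocity update or a mutation-only step, which is the flexibility the abstract attributes to a purely stochastic, gradient-free formulation.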