Adding learning to cellular genetic algorithms for training recurrent neural networks

  • Authors:
  • K. W. C. Ku; Man Wai Mak; Wan Chi Siu

  • Affiliations:
  • Dept. of Electron. & Inf. Eng., Hong Kong Polytech., Kowloon

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1999


Abstract

This paper proposes a hybrid optimization algorithm that combines local search (individual learning) with cellular genetic algorithms (GAs) for training recurrent neural networks (RNNs). Each RNN weight is encoded as a floating-point number, and a concatenation of these numbers forms a chromosome. Reproduction takes place locally on a square grid, with each grid point representing a chromosome. Lamarckian and Baldwinian (Baldwin, 1896) mechanisms for combining the cellular GA and learning are compared. Different hill-climbing algorithms are incorporated into the cellular GA: real-time recurrent learning (RTRL), its simplified versions, and the delta rule. The simplified versions are obtained by successively freezing some of the RNN weights. The delta rule, the simplest form of learning, is implemented by treating the RNN as a feedforward network. The hybrid algorithms are used to train RNNs to solve a long-term dependency problem. The results show that Baldwinian learning is inefficient in assisting the cellular GA. It is conjectured that the more difficult it is for genetic operations to produce the genotypic changes that match the phenotypic changes due to learning, the poorer the convergence of Baldwinian learning. Most of the combinations using the Lamarckian mechanism reduce the number of generations needed to reach an optimal network; however, only a few reduce the actual time taken. Embedding the delta rule in the cellular GA is the fastest method. Learning, however, should not be too extensive.
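The Lamarckian combination described above can be sketched in miniature: chromosomes of floating-point weights live on a toroidal grid, each cell mates with its fittest neighbor, and a few gradient steps (standing in for the delta rule) are applied to the offspring, with the learned weights written back into the chromosome. This is a toy illustration under stated assumptions, not the paper's actual setup: the quadratic error surrogate, grid size, learning rate, and mutation scale below are all hypothetical choices.

```python
import random

random.seed(0)

GRID = 4                     # 4x4 grid of chromosomes (cellular GA)
DIM = 3                      # weights per chromosome
TARGET = [1.0, -2.0, 0.5]    # hypothetical optimum standing in for trained RNN weights
LR = 0.1                     # learning rate for the surrogate "delta rule" step

def fitness(w):
    # Negative squared error: a toy surrogate for the (negated) RNN training error.
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def delta_step(w, steps=3):
    # A few gradient-descent steps on the surrogate error (stands in for the delta rule).
    for _ in range(steps):
        w = [wi - LR * 2 * (wi - ti) for wi, ti in zip(w, TARGET)]
    return w

def neighbors(i, j):
    # von Neumann neighborhood with wraparound (toroidal grid).
    return [((i - 1) % GRID, j), ((i + 1) % GRID, j),
            (i, (j - 1) % GRID), (i, (j + 1) % GRID)]

def evolve(grid, generations=30):
    for _ in range(generations):
        new = [[None] * GRID for _ in range(GRID)]
        for i in range(GRID):
            for j in range(GRID):
                # Local selection: mate with the fittest chromosome in the neighborhood.
                mate = max((grid[a][b] for a, b in neighbors(i, j)), key=fitness)
                parent = grid[i][j]
                # Arithmetic crossover plus Gaussian mutation on the floats.
                child = [(p + m) / 2 + random.gauss(0, 0.05)
                         for p, m in zip(parent, mate)]
                # Lamarckian mechanism: the learned weights replace the chromosome,
                # so the improvement is inherited by later generations.
                child = delta_step(child)
                new[i][j] = max(parent, child, key=fitness)
        grid = new
    return grid

grid = [[[random.uniform(-3, 3) for _ in range(DIM)] for _ in range(GRID)]
        for _ in range(GRID)]
grid = evolve(grid)
best = max((c for row in grid for c in row), key=fitness)
print("residual error of best chromosome:", round(-fitness(best), 4))
```

A Baldwinian variant would differ in one line: learning would be used only to evaluate `fitness(delta_step(child))`, while the unlearned `child` is what survives into the next generation.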