Dynamic optimal learning rates of a certain class of fuzzy neural networks and its applications with genetic algorithm

  • Authors:
  • Chi-Hsu Wang; Han-Leih Liu; Chin-Teng Lin

  • Affiliations:
  • Sch. of Microelectron. Eng., Griffith Univ., Brisbane, Qld.

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 2001


Abstract

The stability analysis of the learning rate for a two-layer neural network (NN) is discussed first by minimizing the total squared error between the actual and desired outputs for a set of training vectors. The stable and optimal learning rate, in the sense of maximum error reduction, for each iteration in the training (backpropagation) process can therefore be found for this two-layer NN. It has also been proven in this paper that the dynamic stable learning rate for this two-layer NN must be greater than zero. Thus it is guaranteed that the maximum error reduction can be achieved by choosing the optimal learning rate for the next training iteration. A dynamic fuzzy neural network (FNN) that consists of the fuzzy linguistic process as the premise part and the two-layer NN as the consequence part is then illustrated as an immediate application of our approach. Each part of this dynamic FNN has its own learning rate for training purposes. A genetic algorithm is designed to allow a more efficient tuning process of the two learning rates of the FNN. The objective of the genetic algorithm is to reduce the searching time by searching for only one learning rate, the learning rate of the premise part, in the FNN. The dynamic optimal learning rates of the two-layer NN can be found directly using our innovative approach. Several examples are fully illustrated and excellent results are obtained for the model car backing-up problem and for the identification of nonlinear first-order and second-order systems.
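The two mechanisms described in the abstract can be illustrated with short sketches. First, the per-iteration dynamic learning rate for a two-layer NN: the paper derives this rate analytically, while the stand-in below simply picks, at each backpropagation step, the positive rate from a candidate grid that gives the largest reduction in total squared error. The network sizes, the data, and the candidate grid are all illustrative assumptions, not the paper's formulation.

```python
# Sketch: per-iteration "optimal" learning rate for a two-layer NN, chosen by
# a 1-D search that maximizes total-squared-error reduction (a stand-in for
# the paper's analytically derived dynamic rate).
import numpy as np

rng = np.random.default_rng(0)

# Training set: inputs X (n_samples x n_in), desired outputs D (n_samples x 1)
X = rng.normal(size=(32, 3))
D = np.sin(X.sum(axis=1, keepdims=True))

# Two-layer network: tanh hidden layer followed by a linear output layer
W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def forward(W1, W2):
    H = np.tanh(X @ W1)
    return H, H @ W2

def total_squared_error(W1, W2):
    _, Y = forward(W1, W2)
    return float(np.sum((Y - D) ** 2))

for it in range(200):
    H, Y = forward(W1, W2)
    E = Y - D                               # output error
    G2 = H.T @ E                            # gradient w.r.t. W2
    G1 = X.T @ ((E @ W2.T) * (1 - H ** 2))  # gradient w.r.t. W1 (backprop)

    # Pick the eta > 0 that yields the largest error reduction this iteration.
    candidates = np.logspace(-4, 0, 40)
    errors = [total_squared_error(W1 - eta * G1, W2 - eta * G2)
              for eta in candidates]
    eta = candidates[int(np.argmin(errors))]

    W1 -= eta * G1
    W2 -= eta * G2

print("final total squared error:", total_squared_error(W1, W2))
```

Second, the genetic-algorithm component: because the consequence-part (two-layer NN) rate is obtained directly, the GA only has to search the single premise-part learning rate. The sketch below is a generic real-valued GA over that one scalar; the `fitness` function is a hypothetical placeholder for the FNN training error, which the paper would evaluate with the consequence-part rate found analytically.

```python
# Sketch: GA searching one scalar, the premise-part learning rate.
# The fitness function is a hypothetical placeholder, not the paper's FNN.
import numpy as np

rng = np.random.default_rng(1)

def fitness(eta_p):
    # Placeholder objective: pretend training error is minimized near 0.05.
    return (eta_p - 0.05) ** 2

pop = rng.uniform(1e-4, 1.0, size=20)          # initial population of rates
for gen in range(50):
    scores = np.array([fitness(e) for e in pop])
    parents = pop[np.argsort(scores)[:10]]     # truncation selection
    i, j = rng.integers(0, 10, size=(2, 20))   # random parent pairs
    alpha = rng.uniform(size=20)
    pop = alpha * parents[i] + (1 - alpha) * parents[j]  # arithmetic crossover
    pop += rng.normal(scale=0.01, size=20)     # Gaussian mutation
    pop = np.clip(pop, 1e-4, 1.0)              # keep rates positive

best = pop[np.argmin([fitness(e) for e in pop])]
print("best premise-part learning rate:", best)
```

Reducing the GA's chromosome to one scalar instead of two is what shrinks the search space and, per the abstract, the tuning time.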