Effects of learning rate on the performance of the population based incremental learning algorithm

  • Authors:
  • Komla A. Folly; Ganesh K. Venayagamoorthy

  • Affiliations:
  • Department of Electrical Engineering, University of Cape Town, Cape Town, South Africa and Real-Time Power and Intelligent Systems Laboratory, Missouri University of Science and Technology, Rolla, MO; Real-Time Power and Intelligent Systems Laboratory, Missouri University of Science and Technology, Rolla, MO

  • Venue:
  • IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
  • Year:
  • 2009

Abstract

The effect of the learning rate (LR) on the performance of a relatively new evolutionary algorithm called population-based incremental learning (PBIL) is investigated in this paper. PBIL is a technique that combines a simple genetic algorithm (GA) with competitive learning (CL). Although CL is often studied in the context of artificial neural networks (ANNs), it plays a vital role in PBIL in that the idea of creating a prototype vector in learning vector quantization (LVQ) is central to PBIL. In PBIL, the crossover operator of GAs is abstracted away and the role of the population is redefined. PBIL maintains a real-valued probability vector (PV), or prototype vector, from which solutions are generated. The probability vector controls the random bitstrings generated by PBIL and is updated through learning to produce new individuals. The setting of the LR can greatly affect the performance of PBIL, yet its effect is not fully understood. In this paper, PBIL is used to design power system stabilizers (PSSs) for a multimachine power system. Four case studies with different learning rate patterns are investigated: fixed LR; purely adaptive LR; fixed LR followed by adaptive LR; and adaptive LR followed by fixed LR. It is shown that a smaller learning rate leads to more exploration by the algorithm, which introduces more diversity in the population at the cost of slower convergence. On the other hand, a higher learning rate means more exploitation and, in the case of a fixed LR, can lead to premature convergence. Therefore, in setting the LR, a trade-off is needed between exploitation and exploration.
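
To make the mechanism described in the abstract concrete, the following is a minimal Python sketch of the core PBIL loop: bitstrings are sampled from the probability vector (PV), and the PV is then nudged toward the best sample at a rate set by the LR. The OneMax fitness function and all parameter values here are illustrative assumptions, not the paper's PSS design problem or its exact update schedule.

    import random

    def pbil(fitness, n_bits, pop_size=50, lr=0.1, generations=100):
        """Minimal PBIL loop: sample bitstrings from a probability vector
        (PV), then move the PV toward the best sample at rate lr."""
        pv = [0.5] * n_bits  # start unbiased: each bit equally likely 0 or 1
        best, best_fit = None, float("-inf")
        for _ in range(generations):
            # Generate the population by sampling each bit from the PV.
            pop = [[1 if random.random() < p else 0 for p in pv]
                   for _ in range(pop_size)]
            gen_best = max(pop, key=fitness)
            if fitness(gen_best) > best_fit:
                best, best_fit = gen_best, fitness(gen_best)
            # Competitive-learning-style update: shift each PV entry toward
            # the corresponding bit of this generation's best individual.
            # The paper's adaptive-LR cases would vary lr here over time.
            pv = [p * (1 - lr) + bit * lr for p, bit in zip(pv, gen_best)]
        return best, best_fit

    # Toy usage: maximize the number of 1-bits (OneMax), an illustrative
    # stand-in for the paper's PSS parameter-tuning objective.
    if __name__ == "__main__":
        solution, score = pbil(fitness=sum, n_bits=32, lr=0.1)
        print(solution, score)

With a small lr the PV moves slowly and sampling stays diverse (exploration); with a large lr the PV quickly saturates toward the current best individual (exploitation), mirroring the trade-off the abstract describes.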