Population-Based incremental with adaptive learning rate strategy
ICSI'12 Proceedings of the Third international conference on Advances in Swarm Intelligence - Volume Part I
The effect of the learning rate (LR) on the performance of a newly introduced evolutionary algorithm called population-based incremental learning (PBIL) is investigated in this paper. PBIL is a technique that combines a simple genetic algorithm (GA) with competitive learning (CL). Although CL is often studied in the context of artificial neural networks (ANNs), it plays a vital role in PBIL: the idea of creating a prototype vector in learning vector quantization (LVQ) is central to PBIL. In PBIL, the crossover operator of GAs is abstracted away and the role of the population is redefined. PBIL maintains a real-valued probability vector (PV), or prototype vector, from which solutions are generated. The probability vector controls the random bitstrings generated by PBIL and is updated through learning to produce subsequent individuals. The setting of the LR can greatly affect the performance of PBIL; however, its effect is not yet fully understood. In this paper, PBIL is used to design power system stabilizers (PSSs) for a multimachine power system. Four case studies with different learning rate patterns are investigated: fixed LR; purely adaptive LR; fixed LR followed by adaptive LR; and adaptive LR followed by fixed LR. It is shown that a smaller LR leads to more exploration, which introduces more diversity in the population at the cost of slower convergence. On the other hand, a higher LR means more exploitation and can therefore lead to premature convergence in the case of a fixed LR. In setting the LR, a trade-off is therefore needed between exploration and exploitation.
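The core loop described above (sample bitstrings from a probability vector, then shift the vector toward the best individual at a rate set by the LR) can be sketched as follows. This is a minimal illustration on a toy one-max objective, not the paper's PSS design problem; the parameter values and function names are assumptions chosen for demonstration.

```python
import random

def pbil(n_bits=20, pop_size=30, lr=0.1, generations=100, seed=0):
    """Minimal PBIL sketch: the probability vector (PV) replaces an
    explicit population, and the learning rate (lr) controls how fast
    the PV moves toward the best sampled individual."""
    rng = random.Random(seed)
    pv = [0.5] * n_bits  # one probability per bit, initially unbiased

    def fitness(bits):
        return sum(bits)  # one-max: maximize the number of ones

    best = None
    for _ in range(generations):
        # Sample a population of bitstrings from the probability vector.
        pop = [[1 if rng.random() < p else 0 for p in pv]
               for _ in range(pop_size)]
        elite = max(pop, key=fitness)
        if best is None or fitness(elite) > fitness(best):
            best = elite
        # Learning step: move each PV component toward the elite's bit.
        # A small lr explores (PV changes slowly, diversity persists);
        # a large lr exploits (PV saturates quickly, risking premature
        # convergence) -- the trade-off the abstract describes.
        pv = [(1 - lr) * p + lr * b for p, b in zip(pv, elite)]
    return best, pv

best, pv = pbil()
```

An adaptive-LR variant of the kind studied in the paper would simply change `lr` between generations (e.g. from a small exploratory value to a larger exploitative one) instead of holding it fixed.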