Adapting Self-Adaptive Parameters in Evolutionary Algorithms

  • Authors:
  • Ko-Hsin Liang; Xin Yao; Charles S. Newton

  • Affiliations:
  • Ko-Hsin Liang: School of Computer Science, University College, The University of New South Wales, Australian Defence Force Academy, Canberra, ACT 2600, Australia. liangk@cs.adfa.edu.au
  • Xin Yao: School of Computer Science, The University of Birmingham, Edgbaston, Birmingham B15 2TT, UK. x.yao@cs.bham.ac.uk
  • Charles S. Newton: School of Computer Science, University College, The University of New South Wales, Australian Defence Force Academy, Canberra, ACT 2600, Australia. csn@cs.adfa.edu.au

  • Venue:
  • Applied Intelligence
  • Year:
  • 2001

Abstract

Lognormal self-adaptation has been used extensively in evolutionary programming (EP) and evolution strategies (ES) to adjust the search step size for each objective variable. However, our previous study (K.-H. Liang, X. Yao, Y. Liu, C. Newton, and D. Hoffman, in Evolutionary Programming VII: Proc. of the Seventh Annual Conference on Evolutionary Programming, edited by V. Porto, N. Saravanan, D. Waagen, and A. Eiben, Lecture Notes in Computer Science, vol. 1447, Springer: Berlin, pp. 291–300, 1998) showed that such self-adaptation may rapidly drive the search step size to a value far too small to explore the search space any further, so that the search stagnates. This is called the loss of step size control. A lower bound on the search step size is necessary to avoid this problem; unfortunately, the optimal setting of the lower bound is highly problem-dependent. This paper first analyzes, both theoretically and empirically, how step size control is lost. Two schemes of dynamic lower bound are then proposed, which enable the EP algorithm to adjust the lower bound dynamically during evolution. Experimental results on a set of benchmark functions demonstrate the effectiveness and efficiency of the dynamic lower bound.
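
To make the mechanism concrete, the sketch below shows the standard lognormal update of the step sizes with a lower-bound clamp, using the usual learning rates tau = 1/sqrt(2*sqrt(n)) and tau' = 1/sqrt(2n). This is a minimal illustration, not the paper's implementation; the function names and the success-rate-driven bound update at the end are illustrative assumptions only, and the paper's two dynamic lower bound schemes are defined in the full text.

    import numpy as np

    def lognormal_mutate(x, eta, eta_min, rng):
        """One lognormal self-adaptive mutation (EP/ES style) with a lower bound.

        x       : objective variables, shape (n,)
        eta     : self-adaptive step sizes, shape (n,)
        eta_min : lower bound applied to every step size after the update
        """
        n = x.size
        tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))      # component-wise learning rate
        tau_prime = 1.0 / np.sqrt(2.0 * n)         # global learning rate
        common = rng.standard_normal()             # one draw shared by all components
        eta_new = eta * np.exp(tau_prime * common + tau * rng.standard_normal(n))
        # Without this clamp, eta can collapse toward zero and the search stagnates
        # ("loss of step size control"); the bound keeps exploration alive.
        eta_new = np.maximum(eta_new, eta_min)
        x_new = x + eta_new * rng.standard_normal(n)
        return x_new, eta_new

    # Illustrative only: a dynamic lower bound driven by the fraction of offspring
    # that improved in the last generation (NOT the paper's exact schemes).
    def update_lower_bound(eta_min, success_rate, target=0.2, factor=1.5):
        return eta_min * factor if success_rate > target else eta_min / factor

    rng = np.random.default_rng(0)
    x = rng.uniform(-5.0, 5.0, size=10)
    eta = np.full(10, 3.0)
    x, eta = lognormal_mutate(x, eta, eta_min=1e-3, rng=rng)

The point of making eta_min itself adaptive, as in the paper, is that a fixed clamp that works on one benchmark may be far too coarse or too fine on another; updating it from feedback gathered during the run removes that per-problem tuning burden.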