This paper analyses the convergence of evolutionary algorithms using a technique based on a stochastic Lyapunov function and developed within martingale theory. The technique is applied to a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function, and control parameters, responsible for the variation of the fitness parameters. Although both types of parameters mutate randomly and independently, they converge to the "optimum" through direct selection (for the fitness parameters) and indirect selection (for the control parameters). We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, matching the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some of the martingale inequalities have not been proved analytically, they have been validated numerically with 0.999 confidence using Monte Carlo simulations.
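The interplay described in the abstract — control parameters mutating alongside fitness parameters and being selected only indirectly through the fitness they produce — can be sketched as a minimal (1, λ) evolution strategy with log-normal step-size self-adaptation. This is an illustrative sketch of the general mechanism, not the specific algorithm analysed in the paper; the function names, the learning rate `tau`, and the sphere test function are all assumptions made for the example.

```python
import math
import random

def self_adaptive_es(f, x0, sigma0, lam=10, generations=200, seed=0):
    """Minimal (1, lambda)-ES with log-normal self-adaptation.

    x     -- fitness parameter (a point in the objective function's domain)
    sigma -- control parameter (mutation strength), mutated alongside x
    """
    rng = random.Random(seed)
    tau = 1.0 / math.sqrt(2.0)  # assumed learning rate for the step size
    x, sigma = x0, sigma0
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            # The control parameter mutates first (log-normal perturbation)...
            s = sigma * math.exp(tau * rng.gauss(0.0, 1.0))
            # ...then the fitness parameter is mutated using the new step size.
            y = x + s * rng.gauss(0.0, 1.0)
            offspring.append((f(y), y, s))
        # Direct selection acts on fitness; the winning offspring carries its
        # own sigma forward, so the control parameter is selected indirectly.
        _, x, sigma = min(offspring)  # comma selection: parent is discarded
    return x, sigma

# Usage: minimise the one-dimensional sphere function f(x) = x**2.
best_x, best_sigma = self_adaptive_es(lambda x: x * x, x0=10.0, sigma0=1.0)
```

On unimodal functions such as the sphere, the surviving step sizes track the distance to the optimum, which is what produces the exponential (log-linear) convergence velocity the paper establishes.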