Self-adaptive mutations are known to endow evolutionary algorithms (EAs) with the ability to locate local optima quickly and accurately, but it has been unknown whether these local optima are in fact global optima, provided the EA runs long enough. To answer this question, it is assumed that the (1+1)-EA with self-adaptation is located in the vicinity P of a local solution with objective function value ε. To converge to the global optimum with probability one, the EA must generate an offspring that is an element of the lower level set S containing all solutions (including a global one) with objective function value less than ε. For multimodal objective functions, these sets P and S are generally not adjacent, i.e., min{||x−y|| : x ∈ P, y ∈ S} > 0, so the EA must surmount the barrier of solutions with objective function values larger than ε by a lucky mutation. It is proven that the probability of this event is less than one even over an infinite time horizon. This result implies that the EA can get stuck at a nonglobal optimum with positive probability. Some ideas for avoiding this problem are discussed as well.
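The phenomenon described above can be illustrated with a minimal sketch (not the paper's construction): a (1+1)-EA with log-normal self-adaptation of the mutation strength on a bimodal one-dimensional function. The test function `f`, the starting point near the local basin, and all parameter values are illustrative assumptions. Near the local optimum, large mutations tend to be rejected, so the accepted mutation strength shrinks and the jump across the barrier into the lower level set becomes ever less likely.

```python
import math
import random

def f(x):
    # Illustrative bimodal function (an assumption, not from the paper):
    # local minimum at x = 3 with value 1, global minimum at x = 0 with value 0.
    # The barrier between the two basins must be crossed by a single mutation.
    return min(x * x, (x - 3.0) ** 2 + 1.0)

def one_plus_one_ea(x, sigma, steps=10000, tau=0.3, seed=1):
    """Elitist (1+1)-EA with log-normal self-adaptation of sigma."""
    rng = random.Random(seed)
    for _ in range(steps):
        # Mutate the step size first, then the search point with it.
        child_sigma = sigma * math.exp(tau * rng.gauss(0.0, 1.0))
        child_x = x + child_sigma * rng.gauss(0.0, 1.0)
        if f(child_x) <= f(x):  # (1+1) selection: keep the better point
            x, sigma = child_x, child_sigma
    return x, sigma

# Start inside the local basin around x = 3 with a moderate step size.
x, sigma = one_plus_one_ea(x=2.8, sigma=0.1)
```

Running this, the search point settles near the local optimum at x = 3 (objective value ≥ 1) while the step size collapses, so the run never reaches the global optimum at x = 0: a finite-horizon picture of the positive probability of getting stuck that the abstract proves for the infinite horizon.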