In this paper we investigate multiplicative noise models in the context of continuous optimization. We illustrate how certain intrinsic properties of the noise model cause reasonable search algorithms to fail at locating the optimum of the noiseless part of the objective function. These findings are rigorously investigated for the (1 + 1)-ES minimizing the noisy sphere function. Assuming a lower bound on the support of the noise distribution, we prove that the (1 + 1)-ES diverges when the lower bound allows negative fitness values to be sampled with positive probability, and converges in the opposite case. We discuss the practical implications and limitations of these outcomes and explain how they differ from previous results obtained in the limit of infinite search-space dimensionality.
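To make the setting concrete, the following is a minimal sketch of a scale-invariant (1 + 1)-ES minimizing a sphere function under multiplicative noise. The uniform noise distribution, the step-size constant, and the stored-parent-fitness scheme are illustrative assumptions, not the paper's exact setup; the dichotomy it illustrates is the one stated above (a noise support whose lower bound permits negative fitness values versus one that keeps fitness positive).

```python
import math
import random

def sphere(x):
    """Noiseless objective: f(x) = ||x||^2, with optimum at the origin."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(dim=10, iterations=2000, noise_low=0.5, noise_high=1.5, seed=1):
    """Scale-invariant (1+1)-ES on a multiplicatively noisy sphere (sketch).

    The observed fitness of a point x is sphere(x) * U, where U is
    uniform on [noise_low, noise_high]. Informally, per the result above:
    if noise_low < 0, negative fitness values occur with positive
    probability and the algorithm diverges; if noise_low > 0, it
    converges. All constants here are illustrative choices.
    """
    rng = random.Random(seed)
    x = [1.0] * dim
    s = 0.3 / dim  # normalized step-size constant (illustrative)
    fx = sphere(x) * rng.uniform(noise_low, noise_high)
    trace = [fx]   # stored noisy fitness of the current parent
    for _ in range(iterations):
        sigma = s * math.sqrt(sphere(x))  # scale-invariant: sigma proportional to ||x||
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = sphere(y) * rng.uniform(noise_low, noise_high)
        if fy <= fx:  # plus-selection on the noisy values
            x, fx = y, fy
        trace.append(fx)
    return x, trace
```

Because the parent's noisy fitness is stored, the trace of accepted fitness values is non-increasing by construction; whether the true distance ||x|| to the optimum actually tends to zero is governed by the sign of the lower bound of the noise support, which is exactly the dichotomy analyzed in the paper.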