This paper theoretically investigates the convergence properties of a class of stochastic algorithms that includes both CMA-ES and EDAs, applied to the constrained minimization of continuously differentiable functions. We are interested in algorithms that do not get stuck on a slope of the objective function but converge only to local optima. Convergence to a point that is neither a stationary point of the function nor a boundary point of the feasible region indicates that the convergence behavior is ill-behaved. We investigate which properties are necessary and which are sufficient for an algorithm to avoid this type of behavior, i.e., to converge only to local optima of the function. We also derive the analogous conditions on the parameters of two variants of modern EC-based stochastic algorithms: a CMA-ES employing the rank-μ update and the EDA known as EMNA_global. The comparison of these two apparently similar methods shows that their theoretical behaviors differ significantly. This result offers insight into how to design well-behaved optimization algorithms.
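To make the comparison concrete, the following is a minimal sketch of one iteration of each algorithm. It is not the paper's analysis, only an illustration of the structural difference: EMNA_global refits the covariance around the *newly estimated* mean by maximum likelihood, whereas the rank-μ update of CMA-ES accumulates outer products of the selected steps measured from the *old* mean. The uniform selection weights, the fixed step size, and the learning rate `c_mu` are simplifying assumptions, not parameters taken from the paper.

```python
import numpy as np

def emna_global_step(f, mean, cov, lam, mu, rng):
    """One EMNA_global iteration: sample lam points, keep the mu best,
    refit mean and covariance by maximum likelihood (uniform weights)."""
    X = rng.multivariate_normal(mean, cov, size=lam)
    idx = np.argsort([f(x) for x in X])[:mu]
    elite = X[idx]
    new_mean = elite.mean(axis=0)
    centered = elite - new_mean          # centered at the NEW mean
    new_cov = centered.T @ centered / mu
    return new_mean, new_cov

def cma_rank_mu_step(f, mean, cov, sigma, lam, mu, rng, c_mu=0.3):
    """Simplified rank-mu CMA-ES update: step size sigma held fixed,
    no evolution paths, uniform weights (all simplifying assumptions)."""
    n = len(mean)
    X = mean + sigma * rng.multivariate_normal(np.zeros(n), cov, size=lam)
    idx = np.argsort([f(x) for x in X])[:mu]
    steps = (X[idx] - mean) / sigma      # steps relative to the OLD mean
    new_mean = X[idx].mean(axis=0)
    rank_mu = steps.T @ steps / mu       # rank-mu matrix (PSD)
    new_cov = (1 - c_mu) * cov + c_mu * rank_mu
    return new_mean, new_cov
```

On a slope, centering the covariance at the new mean (EMNA_global) tends to shrink the variance along the descent direction, while accumulating selected steps from the old mean (rank-μ) tends to enlarge it; this structural asymmetry is one source of the differing theoretical behaviors discussed above.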