Computational Optimization and Applications
In numerical optimization, line-search and trust-region methods are two important classes of descent schemes with well-understood global convergence properties. We say that these methods are “accelerated” when the conventional iterate is replaced by any point that produces at least a fixed fraction of the decrease in the cost function produced by the conventional iterate. A detailed convergence analysis reveals that the global convergence properties of line-search and trust-region methods still hold when the methods are accelerated. The analysis is performed in the general context of optimization on manifolds, of which optimization in $\mathbb{R}^n$ is a particular case. This general convergence analysis sheds new light on the behavior of several existing algorithms.
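The acceptance criterion described above can be illustrated with a minimal sketch. The function `accept_accelerated` below is a hypothetical helper (not from the paper): it accepts a candidate point whenever its cost decrease is at least a fixed fraction $c \in (0, 1]$ of the decrease achieved by the conventional iterate, here taken to be a steepest-descent step on a simple quadratic.

```python
import numpy as np

def accept_accelerated(f, x, x_conventional, y_candidate, c=0.5):
    """Hypothetical acceptance test: accept y_candidate if it achieves at
    least a fraction c of the cost decrease of the conventional iterate."""
    decrease_conventional = f(x) - f(x_conventional)
    decrease_candidate = f(x) - f(y_candidate)
    return decrease_candidate >= c * decrease_conventional

# Illustration on f(x) = ||x||^2 with a steepest-descent "conventional" step.
f = lambda x: float(np.dot(x, x))
x = np.array([2.0, 0.0])
x_conv = x - 0.25 * (2 * x)      # conventional iterate: f drops from 4 to 1
y_good = np.array([0.5, 0.0])    # larger decrease: f drops from 4 to 0.25
y_bad = np.array([1.9, 0.0])     # tiny decrease: f drops from 4 to 3.61

print(accept_accelerated(f, x, x_conv, y_good))  # True
print(accept_accelerated(f, x, x_conv, y_bad))   # False
```

Any candidate passing this test may replace the conventional iterate without, per the abstract's claim, affecting global convergence.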