It is shown that the steepest-descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to $\epsilon^{-2}$ to drive the norm of the gradient below $\epsilon$. This shows that the upper bound of $O(\epsilon^{-2})$ evaluations known for the steepest-descent method is tight, and that Newton's method may be as slow as steepest descent in the worst case. The improved evaluation complexity bound of $O(\epsilon^{-3/2})$ evaluations known for cubically regularized Newton methods is also shown to be tight.
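To make the quantities in these bounds concrete, the sketch below runs the two kinds of methods until the stopping test $\|\nabla f(x)\| \le \epsilon$ is met and counts the evaluations used. It is a minimal illustration only: the fixed step sizes, the constant regularization weight $\sigma$, the crude inner gradient-descent solver for the cubic subproblem, and the nonconvex test function are all assumptions made here, not taken from the paper, whose worst-case rates are exhibited on specially constructed functions rather than on any ordinary example like this one.

```python
# Sketch (illustrative, not the paper's construction): count how many
# gradient evaluations each method needs to drive ||grad f(x)|| below eps.
import numpy as np


def steepest_descent(grad, x0, eps=1e-4, alpha=1e-2, max_iter=10**6):
    """Fixed-step steepest descent; returns (x, gradient evaluations)."""
    x = x0.copy()
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:   # stopping test from the abstract
            return x, k + 1
        x = x - alpha * g              # assumed fixed step size
    return x, max_iter


def cubic_regularized_newton(grad, hess, x0, eps=1e-4, sigma=1.0,
                             max_iter=10**4):
    """Each outer step approximately minimizes the cubic model
        m(s) = f(x) + g^T s + 0.5 s^T H s + (sigma/3) ||s||^3,
    here (crudely) by a fixed number of gradient steps on m."""
    x = x0.copy()
    for k in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= eps:
            return x, k + 1
        s = np.zeros_like(x)
        for _ in range(200):           # inner subproblem solve (sketch only)
            ms = g + H @ s + sigma * np.linalg.norm(s) * s   # grad of m at s
            s = s - 1e-2 * ms
        x = x + s
    return x, max_iter


if __name__ == "__main__":
    # Illustrative nonconvex test function (an assumption of this sketch):
    # f(x) = x1^4/4 - x1^2/2 + x2^2/2, with minimizers at (+-1, 0).
    grad = lambda x: np.array([x[0]**3 - x[0], x[1]])
    hess = lambda x: np.array([[3 * x[0]**2 - 1, 0.0], [0.0, 1.0]])

    x0 = np.array([2.0, 2.0])
    for eps in (1e-2, 1e-3, 1e-4):
        _, n_sd = steepest_descent(grad, x0, eps=eps)
        _, n_cr = cubic_regularized_newton(grad, hess, x0, eps=eps)
        print(f"eps={eps:.0e}: steepest descent {n_sd} evals, "
              f"cubic-regularized Newton {n_cr} evals")
```

On a benign function like this one both methods converge far faster than their worst-case rates; the bounds in the abstract say that for some adversarially chosen functions the evaluation counts can grow essentially like $\epsilon^{-2}$ (steepest descent, Newton) and $\epsilon^{-3/2}$ (cubic regularization), and that neither exponent can be improved.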