On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems

  • Authors:
  • C. Cartis; N. I. M. Gould; Ph. L. Toint

  • Affiliations:
  • coralia.cartis@ed.ac.uk; nick.gould@stfc.ac.uk; philippe.toint@fundp.ac.be

  • Venue:
  • SIAM Journal on Optimization
  • Year:
  • 2010

Abstract

It is shown that the steepest-descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to $O(\epsilon^{-2})$ to drive the norm of the gradient below $\epsilon$. This shows that the $O(\epsilon^{-2})$ upper bound on evaluations known for the steepest-descent method is tight, and that Newton's method may be as slow as steepest descent in the worst case. The improved evaluation complexity bound of $O(\epsilon^{-3/2})$ evaluations known for cubically regularized Newton's methods is also shown to be tight.
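
To make the complexity measure concrete, the sketch below runs a textbook steepest-descent loop with Armijo backtracking and counts objective evaluations until $\|\nabla f(x_k)\| \le \epsilon$, the stopping test to which the paper's $O(\epsilon^{-2})$ bound refers. The test function, step rules, and all parameter values here are illustrative assumptions, not taken from the paper, whose worst-case examples are constructed analytically.

```python
import numpy as np

def steepest_descent(f, grad, x0, eps=1e-4, max_iters=100_000):
    """Steepest descent with Armijo backtracking (illustrative sketch).

    Stops once ||grad f(x)|| <= eps -- the first-order stationarity test
    whose worst-case evaluation count the paper bounds by O(eps^{-2}).
    """
    x = x0.astype(float).copy()
    n_evals = 0                          # objective evaluations: the complexity measure
    for k in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x, k, n_evals         # eps-approximate stationary point found
        fx = f(x); n_evals += 1
        t = 1.0
        # Armijo backtracking: halve the step until sufficient decrease holds
        while True:
            f_trial = f(x - t * g); n_evals += 1
            if f_trial <= fx - 1e-4 * t * (g @ g) or t < 1e-12:
                break
            t *= 0.5
        x = x - t * g
    return x, max_iters, n_evals

# Illustrative nonconvex test function (not from the paper).
f = lambda x: x[0]**4 - 2.0 * x[0]**2 + x[1]**2
grad = lambda x: np.array([4.0 * x[0]**3 - 4.0 * x[0], 2.0 * x[1]])

x_final, iters, evals = steepest_descent(f, grad, np.array([1.5, 1.0]))
print(iters, evals, np.linalg.norm(grad(x_final)))
```

Tightening $\epsilon$ in such a loop lets one observe how the evaluation count grows; the paper's contribution is proving that, in the worst case, no better growth than roughly $\epsilon^{-2}$ can be guaranteed for this method or for Newton's method, and no better than $\epsilon^{-3/2}$ for cubic regularization.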