It is well known that the norm of the gradient may be unreliable as a stopping test in unconstrained optimization, and that it often exhibits oscillations in the course of the optimization. In this paper we present results describing the properties of the gradient norm for the steepest descent method applied to quadratic objective functions. We also make some general observations that apply to nonlinear problems, relating the gradient norm, the objective function value, and the path generated by the iterates.
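The oscillatory behavior mentioned above is easy to reproduce. The following sketch (an illustration of the phenomenon, not code from the paper; the matrix, starting point, and function names are chosen here) runs exact-line-search steepest descent on the quadratic f(x) = ½ xᵀAx with A = diag(1, 100). From the chosen starting point the gradient norm *increases* on the first step, even though f itself decreases strictly at every iteration:

```python
def steepest_descent(x, lam, steps):
    """Exact-line-search steepest descent on f(x) = 0.5 * sum_i lam[i]*x[i]**2.

    Returns the gradient norms and function values along the iterates.
    """
    gnorms, fvals = [], []
    for _ in range(steps):
        g = [l * xi for l, xi in zip(lam, x)]            # gradient A x
        gg = sum(gi * gi for gi in g)                    # ||g||^2
        gAg = sum(l * gi * gi for l, gi in zip(lam, g))  # g^T A g
        gnorms.append(gg ** 0.5)
        fvals.append(0.5 * sum(l * xi * xi for l, xi in zip(lam, x)))
        alpha = gg / gAg                                 # exact minimizer along -g
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return gnorms, fvals

# Eigenvalues 1 and 100; this starting point is chosen (for illustration)
# so the gradient components in the eigenbasis have ratio 10, which makes
# the first step amplify the gradient norm by a factor of about 4.95.
lam = [1.0, 100.0]
gnorms, fvals = steepest_descent([10.0, 0.01], lam, 6)
```

A stopping test based on `gnorms` would behave erratically here (the norm jumps up before settling into a slowly decaying zigzag), while `fvals` decreases monotonically, which is exactly why the gradient norm alone can be a misleading termination criterion.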