The steepest descent method for large linear systems is well known to converge very slowly: the number of iterations required is about the same as for gradient descent with the best constant step size, and it grows proportionally to the condition number. Faster gradient descent methods must occasionally take significantly larger step sizes, which in turn yields a rather non-monotone decrease in the residual vector norm. We show that such faster gradient descent methods in fact generate chaotic dynamical systems for the normalized residual vectors. Very little is required to generate chaos here: simply damping steepest descent by a constant factor close to 1 will do. Several variants of the family of faster gradient descent methods are investigated, both experimentally and analytically. The fastest practical methods of this family generally appear to be the known, chaotic, two-step ones. Our results also highlight the need for a better theory of existing faster gradient descent methods.
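To make the setup concrete, here is a minimal Python sketch (not from the paper) contrasting exact steepest descent on a symmetric positive definite system with two of the faster variants mentioned above: damping the steepest descent step by a constant factor close to 1, and a two-step "lagged" variant that reuses the previous steepest descent step, in the spirit of Barzilai-Borwein-type methods. The test matrix, its condition number, the damping factor 0.9, and the iteration count are all assumed illustrative choices, not values from the paper.

```python
import numpy as np

def gradient_descent(A, b, step_rule, n_iter=200):
    """Iterate x_{k+1} = x_k + alpha_k * r_k on the SPD system A x = b,
    where r_k = b - A x_k and alpha_k is chosen by step_rule.
    Returns the history of residual norms ||r_k||."""
    x = np.zeros_like(b)
    r = b - A @ x
    alpha_prev = None
    norms = []
    for _ in range(n_iter):
        norms.append(np.linalg.norm(r))
        Ar = A @ r
        alpha_sd = (r @ r) / (r @ Ar)  # exact (Cauchy) steepest descent step
        alpha = step_rule(alpha_sd, alpha_prev)
        x = x + alpha * r
        r = b - A @ x
        alpha_prev = alpha_sd
    return np.array(norms)

# Hypothetical SPD test problem: diagonal matrix with condition number 1e3.
n = 100
A = np.diag(np.linspace(1.0, 1e3, n))
b = np.ones(n)

# Classical steepest descent: monotone in the residual norm, but slow.
sd = gradient_descent(A, b, lambda a, a_prev: a)

# Damped steepest descent with a constant factor close to 1; per the
# abstract, already this small perturbation yields chaotic residual dynamics.
damped = gradient_descent(A, b, lambda a, a_prev: 0.9 * a)

# Two-step ("lagged") variant: reuse the previous steepest descent step.
lagged = gradient_descent(A, b,
                          lambda a, a_prev: a_prev if a_prev is not None else a)

for name, hist in [("steepest descent", sd),
                   ("damped (0.9)", damped),
                   ("lagged/two-step", lagged)]:
    non_monotone = bool((np.diff(hist) > 0).any())
    print(f"{name:18s} final ||r|| = {hist[-1]:.3e}  non-monotone: {non_monotone}")
```

Running the sketch shows the trade-off the abstract describes: the damped and two-step variants reach far smaller residuals within the same iteration budget, but their residual norm histories are erratic rather than monotonically decreasing.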