In this paper we deal with the iterative computation of negative curvature directions of an objective function within large scale optimization frameworks. In particular, suitable negative curvature directions of the objective function are an essential tool for guaranteeing convergence to second order critical points. However, an "adequate" negative curvature direction is often required to closely resemble an eigenvector corresponding to the smallest eigenvalue of the Hessian matrix, so that its computation may be a very difficult task on large scale problems. Several strategies proposed in the literature compute such a direction by means of matrix factorizations, which may be inefficient or even impracticable in a large scale setting. The iterative methods proposed so far, on the other hand, either need to store a large matrix or need to rerun the recurrence. Along this line, in this paper we propose the use of an iterative method based on a planar Conjugate Gradient scheme. Under mild assumptions, we provide the theory for using the latter method to compute adequate negative curvature directions within optimization frameworks. Our proposal avoids both matrix storage and any additional rerun.
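To illustrate the matrix-free setting the abstract refers to, the following is a minimal sketch of how negative curvature is commonly detected inside a standard (non-planar) Conjugate Gradient loop, as in truncated-Newton methods: the Hessian is accessed only through Hessian-vector products, and the iteration is stopped as soon as a search direction with nonpositive curvature is encountered. This is a simplified standard scheme, not the planar CG recurrence proposed in the paper; the function name and interface are illustrative assumptions.

```python
import numpy as np

def cg_negative_curvature(hess_vec, g, tol=1e-8, max_iter=200):
    """Matrix-free CG on H x = -g that halts on negative curvature.

    hess_vec : callable mapping v -> H @ v (the Hessian H is never stored)
    g        : gradient vector
    Returns (x, d): the current CG iterate x and, if detected, a direction
    d with d^T H d <= 0; otherwise d is None.
    """
    n = g.shape[0]
    x = np.zeros(n)
    r = -g.copy()              # residual of H x = -g at x = 0
    p = r.copy()               # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Hp = hess_vec(p)       # only Hessian-vector products are used
        curv = p @ Hp          # curvature of H along p
        if curv <= 0:          # nonpositive curvature detected: stop
            return x, p
        alpha = rs_old / curv
        x = x + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break              # system solved to tolerance
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x, None
```

Note that the direction returned this way merely has nonpositive curvature; the point of the paper is precisely that an "adequate" direction should also approximate an eigenvector of the smallest Hessian eigenvalue, which this simple scheme does not guarantee.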