A Modified Rank One Update Which Converges Q-Superlinearly
Computational Optimization and Applications
Several recent computational studies have shown that the symmetric rank-one (SR1) update is a very competitive quasi-Newton update for optimization algorithms. This paper gives a new analysis of a trust region SR1 method for unconstrained optimization and shows that the method has an $(n+1)$-step $q$-superlinear rate of convergence. The analysis requires neither uniform linear independence of the steps nor positive definiteness of the Hessian approximations, assumptions that have been made in other recent analyses of SR1 methods. The trust region method analyzed is fairly standard, except that the Hessian approximation is updated after every step, including rejected steps. We also present computational results showing that this feature, safeguarded in a way consistent with the convergence analysis, does not harm the efficiency of the SR1 trust region method.