In this paper, we discuss the convergence of the DFP algorithm with a revised search direction. Under certain inexact line searches, we prove that the algorithm is globally convergent for continuously differentiable functions, and that its rate of convergence is one-step superlinear and n-step second-order for uniformly convex objective functions. As a byproduct of the proof, we also obtain the superlinear and n-step second-order convergence of the DFP algorithm itself for uniformly convex objective functions.
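For readers unfamiliar with the method under study, the classical DFP iteration maintains an inverse-Hessian approximation H and updates it from the step s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k. The sketch below is a minimal illustration of the standard DFP update, not the paper's revised-direction variant; the helper names (`dfp_minimize`, the `line_search` callback) are assumptions for this example.

```python
import numpy as np

def dfp_minimize(grad, x0, H0, steps=50, line_search=None):
    """Minimal DFP quasi-Newton iteration (illustrative sketch only).

    grad        -- gradient of the objective
    x0, H0      -- starting point and initial inverse-Hessian approximation
    line_search -- callable (x, d) -> step length alpha
    """
    x, H = x0.astype(float), H0.astype(float)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        d = -H @ g                      # quasi-Newton search direction
        alpha = line_search(x, d)       # step length chosen by the caller
        s = alpha * d
        x_new = x + s
        y = grad(x_new) - g
        if s @ y > 1e-12:               # curvature condition keeps H positive definite
            Hy = H @ y
            # DFP update: H+ = H + s s^T/(s^T y) - (H y)(H y)^T/(y^T H y)
            H = H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        x = x_new
    return x
```

On a strictly convex quadratic f(x) = (1/2) x^T A x - b^T x with exact line search (alpha = -(d^T g)/(d^T A d)), DFP generates conjugate directions and terminates at the minimizer A^{-1} b in at most n steps, which is the quadratic setting behind the n-step convergence results above.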