The BFGS method is one of the best-known quasi-Newton algorithms for unconstrained optimization. In 1984, Powell presented an example of a function of two variables showing that the Polak--Ribière--Polyak (PRP) conjugate gradient method and the BFGS quasi-Newton method may cycle around eight nonstationary points when each line search picks a local minimum that reduces the objective function. In this paper, a new technique for choosing the parameters is introduced, and an example with only six cyclic points is provided. These examples also show that the BFGS method with Wolfe line searches need not converge for nonconvex objective functions.
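For context, the BFGS update referenced above is B_{k+1} = B_k - (B_k s_k s_k^T B_k)/(s_k^T B_k s_k) + (y_k y_k^T)/(y_k^T s_k), where s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k). The Python sketch below implements plain BFGS (in inverse-Hessian form) with a simple Wolfe line search, only to make the setting concrete; the bracketing line-search scheme, the tolerances, and the Rosenbrock test function are illustrative assumptions, not the parameter choices or the six-point cycling example constructed in the paper.

    import numpy as np

    def wolfe_line_search(f, grad, x, p, c1=1e-4, c2=0.9, max_iter=25):
        # Crude bracketing search for a step satisfying the (standard) Wolfe conditions.
        alpha, lo, hi = 1.0, 0.0, np.inf
        f0, g0 = f(x), grad(x) @ p          # g0 < 0 for a descent direction p
        for _ in range(max_iter):
            if f(x + alpha * p) > f0 + c1 * alpha * g0:    # sufficient decrease fails
                hi = alpha
                alpha = 0.5 * (lo + hi)
            elif grad(x + alpha * p) @ p < c2 * g0:        # curvature condition fails
                lo = alpha
                alpha = 2.0 * alpha if np.isinf(hi) else 0.5 * (lo + hi)
            else:
                return alpha
        return alpha                        # fall back to the last trial step

    def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
        # Inverse-Hessian form: H_{k+1} = (I - rho s y^T) H_k (I - rho y s^T) + rho s s^T.
        n = len(x0)
        x, H = np.asarray(x0, dtype=float), np.eye(n)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            p = -H @ g                      # quasi-Newton search direction
            alpha = wolfe_line_search(f, grad, x, p)
            x_new = x + alpha * p
            s, y = x_new - x, grad(x_new) - g
            sy = s @ y
            if sy > 1e-12:                  # update only when curvature s^T y > 0
                rho = 1.0 / sy
                V = np.eye(n) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)
            x = x_new
        return x

    # Illustrative run on the Rosenbrock function (an assumed test problem):
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    print(bfgs(f, grad, np.array([-1.2, 1.0])))   # should approach (1, 1)

Note that nothing in such a sketch rules out the behavior discussed above: on a nonconvex objective, steps satisfying the Wolfe conditions alone do not guarantee convergence to a stationary point, which is precisely what the cycling examples exploit.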