Algorithm 500: Minimization of Unconstrained Multivariate Functions [E4]. ACM Transactions on Mathematical Software (TOMS).
Modified Two-Point Stepsize Gradient Methods for Unconstrained Optimization. Computational Optimization and Applications.
The Barzilai and Borwein Gradient Method for the Large Scale Unconstrained Minimization Problem. SIAM Journal on Optimization.
Convergence Properties of Nonlinear Conjugate Gradient Methods. SIAM Journal on Optimization.
CUTEr and SifDec: A Constrained and Unconstrained Testing Environment, Revisited. ACM Transactions on Mathematical Software (TOMS).
The Superlinear Convergence of a Modified BFGS-Type Method for Unconstrained Optimization. Computational Optimization and Applications.
A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search. SIAM Journal on Optimization.
Algorithm 851: CG_DESCENT, a Conjugate Gradient Method with Guaranteed Descent. ACM Transactions on Mathematical Software (TOMS).
New Conjugacy Condition and Related New Conjugate Gradient Methods for Unconstrained Optimization. Journal of Computational and Applied Mathematics.
Scaled Conjugate Gradient Algorithms for Unconstrained Optimization. Computational Optimization and Applications.
Scaled Memoryless BFGS Preconditioned Conjugate Gradient Algorithm for Unconstrained Optimization. Optimization Methods & Software.
Two New Conjugate Gradient Methods Based on Modified Secant Equations. Journal of Computational and Applied Mathematics.
Two Effective Hybrid Conjugate Gradient Algorithms Based on Modified BFGS Updates. Numerical Algorithms.
Computational Optimization and Applications
Following the scaled conjugate gradient methods proposed by Andrei, we hybridize the memoryless BFGS preconditioned conjugate gradient method suggested by Shanno and the spectral conjugate gradient method suggested by Birgin and Martínez, based on a modified secant equation suggested by Yuan, and propose two modified scaled conjugate gradient methods. A notable feature of our methods is that they use function values in addition to gradient values, and that the generated search directions satisfy the sufficient descent condition, which leads to global convergence for uniformly convex functions. Numerical comparisons are made between an implementation of one of our methods, which generates descent search directions for general functions, and an efficient scaled conjugate gradient method proposed by Andrei, on a set of unconstrained optimization test problems from the CUTEr collection, using the performance profiles introduced by Dolan and Moré.
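To make the ingredients concrete, the following is a minimal sketch of a generic spectral (scaled) conjugate gradient iteration in the style of Birgin and Martínez, with a Barzilai–Borwein scaling parameter and an Armijo backtracking line search. It is an illustrative assumption-laden simplification, not the authors' hybrid method: it omits the memoryless BFGS preconditioning and the modified secant equation that use function values.

```python
import numpy as np

def spectral_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Illustrative spectral conjugate gradient sketch (Birgin-Martinez flavor).

    Hypothetical simplification for exposition; not the paper's exact method.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along the descent direction d
        alpha, c, rho = 1.0, 1e-4, 0.5
        fx, slope = f(x), g.dot(d)
        while f(x + alpha * d) > fx + c * alpha * slope and alpha > 1e-12:
            alpha *= rho
        s = alpha * d                  # step s_k = x_{k+1} - x_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                  # gradient difference y_k
        sy = s.dot(y)
        # spectral (Barzilai-Borwein) scaling parameter theta_{k+1}
        theta = s.dot(s) / sy if sy > 1e-12 else 1.0
        # spectral CG direction: d = -theta*g + beta*s
        beta = (theta * y - s).dot(g_new) / sy if sy > 1e-12 else 0.0
        d = -theta * g_new + beta * s
        # restart with scaled steepest descent if descent is lost
        if d.dot(g_new) >= 0.0:
            d = -theta * g_new
        x, g = x_new, g_new
    return x
```

The restart safeguard enforces a descent direction for general functions; the paper's methods instead achieve the sufficient descent condition by construction.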