Journal of Optimization Theory and Applications
Line search algorithms with guaranteed sufficient decrease
ACM Transactions on Mathematical Software (TOMS)
Multi-step quasi-Newton methods for optimization
ICCAM'92: Proceedings of the Fifth International Conference on Computational and Applied Mathematics
Quadratic and Superlinear Convergence of the Huschens Method for Nonlinear Least Squares Problems
Computational Optimization and Applications
Testing Unconstrained Optimization Software
ACM Transactions on Mathematical Software (TOMS)
An Adaptive Nonlinear Least-Squares Algorithm
ACM Transactions on Mathematical Software (TOMS)
A modified BFGS method and its global convergence in nonconvex minimization
Journal of Computational and Applied Mathematics - Special issue on nonlinear programming and variational inequalities
Convergence Properties of Nonlinear Conjugate Gradient Methods
SIAM Journal on Optimization
A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
SIAM Journal on Optimization
Global Convergence Properties of Nonlinear Conjugate Gradient Methods with Modified Secant Condition
Computational Optimization and Applications
Multi-step nonlinear conjugate gradient methods for unconstrained minimization
Computational Optimization and Applications
Journal of Computational and Applied Mathematics
In this paper, we deal with conjugate gradient methods for solving nonlinear least squares problems. Several Newton-like methods have been studied for such problems, including the Gauss-Newton method, the Levenberg-Marquardt method, and structured quasi-Newton methods. On the other hand, conjugate gradient methods are appealing for general large-scale nonlinear optimization problems because of their low storage requirements. By combining the structured secant condition with the idea of Dai and Liao (2001) [20], the present paper proposes conjugate gradient methods that exploit the structure of the Hessian of the objective function of nonlinear least squares problems. The proposed methods are shown to be globally convergent under suitable assumptions. Finally, numerical results are reported.
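For context, the Hessian structure the abstract refers to is the standard one for least squares: with residual vector r(x) and Jacobian J(x),

\[
f(x) = \tfrac{1}{2}\,\|r(x)\|^2, \qquad
\nabla f(x) = J(x)^T r(x), \qquad
\nabla^2 f(x) = J(x)^T J(x) + \sum_{i=1}^{m} r_i(x)\,\nabla^2 r_i(x).
\]

Structured methods compute the Gauss-Newton term J(x)^T J(x) exactly and approximate only the second-order sum, while the Dai-Liao (2001) conjugacy condition d_{k+1}^T y_k = -t\, g_{k+1}^T s_k (with t >= 0 and s_k = x_{k+1} - x_k) generates conjugate gradient directions from a chosen vector y_k. The sketch below only illustrates how these two ingredients might be combined; it is not the authors' algorithm. The particular structured vector y_sharp, the nonnegativity safeguard on beta, and the Armijo backtracking line search (a stand-in for the Wolfe-type line search the convergence theory relies on) are all assumptions made for this sketch.

import numpy as np

def structured_dl_cg(r, J, x0, t=1.0, tol=1e-8, max_iter=500):
    """Illustrative Dai-Liao-type CG for f(x) = 0.5*||r(x)||^2.

    r : callable returning the residual vector at x
    J : callable returning the Jacobian of r at x
    t : Dai-Liao parameter (t >= 0); t, the structured vector below,
        and the Armijo search are assumptions of this sketch.
    """
    x = np.asarray(x0, dtype=float)
    rk, Jk = r(x), J(x)
    g = Jk.T @ rk                       # gradient J(x)^T r(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        slope = g @ d
        if slope >= 0.0:                # safeguard: restart with steepest descent
            d, slope = -g, -(g @ g)
        # Armijo backtracking, a simple stand-in for a Wolfe line search
        alpha, f_old = 1.0, 0.5 * (rk @ rk)
        while (0.5 * np.sum(r(x + alpha * d) ** 2) > f_old + 1e-4 * alpha * slope
               and alpha > 1e-12):
            alpha *= 0.5
        x_new = x + alpha * d
        r_new, J_new = r(x_new), J(x_new)
        g_new = J_new.T @ r_new
        s = x_new - x
        # structured secant vector: exact Gauss-Newton part plus a
        # difference surrogate for the second-order term (one common choice)
        y_sharp = J_new.T @ (J_new @ s) + (J_new - Jk).T @ r_new
        denom = d @ y_sharp
        beta = 0.0
        if abs(denom) > 1e-16:
            # Dai-Liao-type formula with y_sharp replacing g_new - g
            beta = max((g_new @ y_sharp - t * (g_new @ s)) / denom, 0.0)
        d = -g_new + beta * d
        x, rk, Jk, g = x_new, r_new, J_new, g_new
    return x

# Usage: Rosenbrock in least-squares form, minimizer at (1, 1)
rosen_r = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
rosen_J = lambda x: np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])
print(structured_dl_cg(rosen_r, rosen_J, np.array([-1.2, 1.0]), t=0.1))

The restart-with-steepest-descent guard and the truncation of beta at zero keep every search direction a descent direction, which is the usual price paid in sketches like this one for dropping the exact line search conditions that the paper's global convergence analysis assumes.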