A family of hybrid conjugate gradient methods for unconstrained optimization

  • Authors:
  • Yu-Hong Dai

  • Affiliations:
  • State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, People's Republic of China

  • Venue:
  • Mathematics of Computation
  • Year:
  • 2003

Abstract

Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have been studied extensively in recent years. This paper proposes a three-parameter family of hybrid conjugate gradient methods. The family has two important features: (i) it can avoid the propensity for small steps, in the sense that if a small step is generated away from the solution point, the next search direction will be close to the negative gradient direction; and (ii) its descent property and global convergence are likely to be achieved provided that the line search satisfies the Wolfe conditions. Some numerical results with the family are also presented.
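To make the ideas in the abstract concrete, the sketch below shows a generic nonlinear conjugate gradient iteration with a Wolfe line search and a hybrid β formula. Note this is an illustration, not the paper's three-parameter family (whose formula is not given in the abstract): the β shown is the well-known hybrid that clips the Polak-Ribière value by the Fletcher-Reeves value, and the simple bisection line search, the quadratic test function, and all names are assumptions for the sake of a runnable example.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.4, alpha=1.0, max_iter=30):
    """Find a step length satisfying the (weak) Wolfe conditions by bisection.

    c1, c2 are the usual sufficient-decrease and curvature parameters,
    with 0 < c1 < c2 < 1.
    """
    fx, g0 = f(x), grad(x) @ d          # f value and directional derivative at x
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * g0:
            # Armijo (sufficient decrease) fails: step too long, shrink
            hi = alpha
            alpha = 0.5 * (lo + hi)
        elif grad(x + alpha * d) @ d < c2 * g0:
            # Curvature condition fails: step too short, grow
            lo = alpha
            alpha = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
        else:
            return alpha
    return alpha

def hybrid_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Nonlinear CG with a hybrid beta (illustrative, not Dai's family)."""
    x = x0.astype(float)
    g = grad(x)
    d = -g                               # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = wolfe_line_search(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Hybrid beta: Polak-Ribiere clipped by Fletcher-Reeves and by 0
        beta_fr = (g_new @ g_new) / (g @ g)
        beta_pr = (g_new @ (g_new - g)) / (g @ g)
        beta = max(0.0, min(beta_pr, beta_fr))
        d = -g_new + beta * d
        if g_new @ d >= 0:
            # Safeguard: restart with steepest descent if descent is lost;
            # this mirrors feature (i): fall back toward the negative gradient
            d = -g_new
        x, g = x_new, g_new
    return x

# Usage example on a convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose minimizer solves A x = b (here x* = [0.2, 0.4]).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = hybrid_cg(f, grad, np.zeros(2))
```

The safeguard that resets the direction to the negative gradient is one simple way to realize the behavior described in feature (i) of the abstract; the paper's family achieves this property through its parameterized β instead.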