Linear convergence of the conjugate gradient method

  • Authors:
  • Harlan Crowder; Philip Wolfe

  • Affiliations:
  • IBM Thomas J. Watson Research Center, Yorktown Heights, New York (both authors)

  • Venue:
  • IBM Journal of Research and Development
  • Year:
  • 1972

Abstract

There are two procedures for applying the method of conjugate gradients to the problem of minimizing a convex nonlinear function: the "continued" method, and the "restarted" method, in which all the data except the best previous point are discarded and the procedure is begun anew from that point. It is demonstrated by example that, in the absence of the standard starting condition, the continued conjugate gradient method applied to a quadratic function converges to the solution no better than linearly. Furthermore, it is shown that for a general nonlinear function, the nonrestarted conjugate gradient method converges no worse than linearly.
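
The difference between the two procedures is mechanical: the continued method always builds the next search direction from the previous one, while the restarted method periodically discards that direction data and resets to steepest descent. The following is a minimal sketch of that distinction, not the paper's construction; the Fletcher-Reeves update formula, the Armijo backtracking line search, and the restart_every parameter are assumptions made here for concreteness.

```python
# Minimal sketch: "continued" vs. "restarted" nonlinear conjugate gradients.
# Fletcher-Reeves beta and a backtracking line search are illustrative
# choices; they are not taken from the paper.
import numpy as np

def conjugate_gradient(f, grad, x0, restart_every=None,
                       tol=1e-8, max_iter=500):
    """restart_every=None -> the "continued" method.
    restart_every=k -> the "restarted" method: every k steps all
    direction data are discarded and the iteration begins anew."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # standard starting condition: first direction is steepest descent
    for k in range(1, max_iter + 1):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0.0:
            d = -g  # safeguard: fall back to steepest descent if d is not a descent direction
        # Backtracking (Armijo) line search along d.
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        if restart_every is not None and k % restart_every == 0:
            d = -g_new                          # restart: discard old direction
        else:
            beta = g_new.dot(g_new) / g.dot(g)  # Fletcher-Reeves coefficient
            d = -g_new + beta * d               # continued update
        g = g_new
    return x

# Convex quadratic test problem: f(x) = 0.5 x'Ax - b'x, minimizer A^{-1}b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x0 = np.array([5.0, -3.0])
print(conjugate_gradient(f, grad, x0))                   # continued
print(conjugate_gradient(f, grad, x0, restart_every=2))  # restarted
print(np.linalg.solve(A, b))                             # exact minimizer
```

With exact line searches on an n-dimensional quadratic, conjugate gradients under the standard starting condition terminates in at most n steps; the abstract's point is that dropping that condition can degrade the continued method to merely linear convergence.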