A New Gradient Method with an Optimal Stepsize Property

  • Authors:
  • Y. H. Dai; X. Q. Yang

  • Affiliations:
  • State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Acade ...; Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon

  • Venue:
  • Computational Optimization and Applications
  • Year:
  • 2006

Abstract

The gradient method for the symmetric positive definite linear system $$Ax=b$$ takes the form $$x_{k + 1}=x_{k}-\alpha_{k} g_{k},$$ where $$g_{k}=Ax_{k}-b$$ is the residual of the system at $$x_k$$ and $$\alpha_k$$ is the stepsize. The stepsize $$\alpha_{k} = \frac{2}{{\lambda_{1}+\lambda_{n}}}$$ is optimal in the sense that it minimizes the modulus $$||I - \alpha A||_{2}$$, where $$\lambda_1$$ and $$\lambda_n$$ are the minimal and maximal eigenvalues of $$A$$, respectively. Since $$\lambda_1$$ and $$\lambda_n$$ are unknown to users, the gradient method with the optimal stepsize is usually of theoretical interest only. In this paper, we propose a new stepsize formula that tends to the optimal stepsize as $$k \to \infty$$. At the same time, the minimal and maximal eigenvalues $$\lambda_1$$ and $$\lambda_n$$ of $$A$$, together with their corresponding eigenvectors, can be obtained.
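As a rough illustration of the classical iteration the abstract describes (not the paper's proposed stepsize formula), the following NumPy sketch runs the gradient method with the fixed optimal stepsize $$\alpha = 2/(\lambda_1+\lambda_n)$$. For demonstration only, the eigenvalues are computed directly with `np.linalg.eigvalsh`; the paper's point is precisely that they are unknown in practice.

```python
import numpy as np

def gradient_method(A, b, x0, num_iters=500):
    """Fixed-stepsize gradient method for the SPD system Ax = b.

    Uses alpha = 2/(lambda_1 + lambda_n), computed here from the
    eigenvalues of A purely for illustration (they are normally unknown).
    """
    eigvals = np.linalg.eigvalsh(A)           # sorted ascending
    alpha = 2.0 / (eigvals[0] + eigvals[-1])  # 2/(lambda_min + lambda_max)
    x = x0.astype(float)
    for _ in range(num_iters):
        g = A @ x - b       # residual g_k = A x_k - b
        x = x - alpha * g   # x_{k+1} = x_k - alpha g_k
    return x

# Small SPD example
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gradient_method(A, b, np.zeros(2))
```

With this fixed stepsize the error contracts by the factor $$(\lambda_n-\lambda_1)/(\lambda_n+\lambda_1)$$ per iteration, which is what makes $$\alpha = 2/(\lambda_1+\lambda_n)$$ the minimizer of $$||I-\alpha A||_2$$.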