The gradient method for the symmetric positive definite linear system $$Ax=b$$ is $$x_{k+1}=x_{k}-\alpha_{k} g_{k},$$ where $$g_{k}=Ax_{k}-b$$ is the residual of the system at $$x_{k}$$ and $$\alpha_{k}$$ is the stepsize. The stepsize $$\alpha_{k} = \frac{2}{\lambda_{1}+\lambda_{n}}$$ is optimal in the sense that it minimizes the norm $$\|I - \alpha A\|_{2}$$, where $$\lambda_{1}$$ and $$\lambda_{n}$$ are the minimal and maximal eigenvalues of A, respectively. Since $$\lambda_{1}$$ and $$\lambda_{n}$$ are unknown to users, the gradient method with the optimal stepsize is usually of only theoretical interest. In this paper, we propose a new stepsize formula that tends to the optimal stepsize as $$k \to \infty$$. At the same time, the minimal and maximal eigenvalues $$\lambda_{1}$$ and $$\lambda_{n}$$ of A and their corresponding eigenvectors can be obtained.
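As an illustration of the classical setup described above, the following minimal Python sketch runs the gradient iteration $$x_{k+1}=x_{k}-\alpha g_{k}$$ with the fixed optimal stepsize $$\alpha = 2/(\lambda_{1}+\lambda_{n})$$. It is not the paper's new stepsize formula (which is not reproduced in the abstract); the extreme eigenvalues are computed directly with numpy purely for demonstration, whereas in practice they are unknown and must be approximated, which is the problem the paper addresses.

```python
import numpy as np

def gradient_method(A, b, x0, tol=1e-10, max_iter=10_000):
    # Classical gradient method with the optimal fixed stepsize
    # alpha = 2 / (lambda_1 + lambda_n).  For illustration only: the extreme
    # eigenvalues are obtained here via numpy, although in practice they are
    # unknown to the user.
    lam = np.linalg.eigvalsh(A)           # eigenvalues of the SPD matrix A (ascending)
    alpha = 2.0 / (lam[0] + lam[-1])      # optimal stepsize 2 / (lambda_1 + lambda_n)
    x = x0.copy()
    for k in range(max_iter):
        g = A @ x - b                     # g_k = A x_k - b
        if np.linalg.norm(g) < tol:
            break
        x = x - alpha * g                 # x_{k+1} = x_k - alpha * g_k
    return x, k

# Small SPD test problem (hypothetical example data)
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)               # symmetric positive definite by construction
b = rng.standard_normal(5)
x, iters = gradient_method(A, b, np.zeros(5))
print(iters, np.linalg.norm(A @ x - b))   # iteration count and final residual norm
```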