The rate of convergence of conjugate gradients. Numerische Mathematik.
Deflation of conjugate gradients with applications to boundary value problems. SIAM Journal on Numerical Analysis.
A Deflated Version of the Conjugate Gradient Algorithm. SIAM Journal on Scientific Computing.
Accuracy and Stability of Numerical Algorithms.
Iterative Methods for Sparse Linear Systems.
A Class of Spectral Two-Level Preconditioners. SIAM Journal on Scientific Computing.
When solving the symmetric positive definite (SPD) linear system Ax = b with the conjugate gradient method, the smallest eigenvalues of the matrix A often slow down convergence. Consequently, if the smallest eigenvalues of A could somehow be "removed", convergence might improve. This observation matters even when a preconditioner is used, and extra techniques can be investigated to further improve the convergence rate of conjugate gradients on the given preconditioned system. Several techniques proposed in the literature either update the preconditioner or force the conjugate gradient iteration to work in the orthogonal complement of an invariant subspace associated with the smallest eigenvalues. In this work, we compare the numerical efficiency, computational complexity, and sensitivity to the accuracy of the spectral information of the techniques presented in [1], [2], and [3]. A more detailed description of these approaches, as well as of other comparable techniques, on a range of standard test problems is available in [4].
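To make the second family of techniques concrete, the following is a minimal sketch (not the exact algorithm of any of the cited papers) of a deflated conjugate gradient iteration in the style of Saad et al.: given a basis W whose columns approximately span the invariant subspace of A associated with its smallest eigenvalues, the starting guess and every search direction are projected so that the iteration effectively works in the orthogonal complement of range(W). The function name and all variable names are illustrative.

```python
import numpy as np

def deflated_cg(A, b, W, tol=1e-10, maxiter=500):
    """Sketch of deflated CG for SPD A.

    W : (n, k) basis for an (approximate) invariant subspace of A
        associated with its smallest eigenvalues.
    """
    AW = A @ W
    AW_small = W.T @ AW                      # k-by-k matrix W^T A W
    solve_small = lambda v: np.linalg.solve(AW_small, v)

    # Starting guess whose residual is orthogonal to range(W).
    x = W @ solve_small(W.T @ b)
    r = b - A @ x
    # Initial direction, made A-orthogonal to range(W).
    p = r - W @ solve_small(AW.T @ r)
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        beta = rs_new / rs
        # Deflate the new search direction against range(W).
        p = r + beta * p - W @ solve_small(AW.T @ r)
        rs = rs_new
    return x
```

With W spanning the eigenvectors of the smallest eigenvalues exactly, the effective condition number governing the CG bound drops from lambda_max/lambda_min to lambda_max/lambda_{k+1}, which is the mechanism the abstract alludes to; the comparison in the paper concerns how this degrades when W is only approximate.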