We consider the solution of linear systems of equations Ax = b, with A a symmetric positive-definite matrix in ℝ^{n×n}, through Richardson-type iterations or, equivalently, the minimization of convex quadratic functions (1/2)(Ax, x) − (b, x) with a gradient algorithm. The use of step-sizes asymptotically distributed with the arcsine distribution on the spectrum of A then yields an asymptotic rate of convergence after kn iterations, k → ∞, that coincides with that of the conjugate-gradient algorithm in the worst case. However, the spectral bounds m and M are generally unknown and thus need to be estimated to allow the construction of simple and cost-effective gradient algorithms with fast convergence. The purpose of this paper is to analyse the properties of estimators of m and M based on moments of probability measures ν_k defined on the spectrum of A and generated by the algorithm on its way towards the optimal solution. A precise analysis of the behaviour of the rate of convergence of the algorithm is also given. Two situations are considered: (i) the step-sizes form a sequence of i.i.d. random variables; (ii) they are generated through a dynamical system (fractional parts of the golden ratio) producing a low-discrepancy sequence. In the first case, properties of random walks can be used to prove the convergence of simple spectral-bound estimators based on the first moment of ν_k. The second option requires a more careful choice of spectral-bound estimators, but is shown to produce much smaller fluctuations in the rate of convergence of the algorithm.
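As a rough illustration of the iteration described in the abstract (a minimal sketch, not the authors' code), the scheme below runs Richardson/gradient steps x_{k+1} = x_k − (1/λ_k)(Ax_k − b), with relaxation parameters λ_k drawn from the arcsine distribution on [m, M], either i.i.d. (case (i)) or via the low-discrepancy sequence of fractional parts of multiples of the golden ratio (case (ii)). The function name `richardson_arcsine`, the test matrix, and the use of the exact spectral bounds m and M are assumptions for the demo; the paper's subject is precisely how to estimate m and M when they are unknown.

```python
import numpy as np

def richardson_arcsine(A, b, m, M, n_iter=500, golden=False, seed=0):
    """Richardson iteration x_{k+1} = x_k - (1/lam_k) * (A x_k - b),
    with relaxation parameters lam_k arcsine-distributed on [m, M].

    golden=False: lam_k built from i.i.d. uniform draws (case (i)).
    golden=True:  lam_k built from the low-discrepancy sequence
                  u_k = frac(k * phi), phi the golden-ratio fraction (case (ii)).
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    phi = (np.sqrt(5.0) - 1.0) / 2.0  # fractional part of the golden ratio
    for k in range(n_iter):
        u = ((k + 1) * phi) % 1.0 if golden else rng.random()
        # inverse-CDF transform: lam is arcsine-distributed on [m, M]
        lam = m + (M - m) * np.sin(0.5 * np.pi * u) ** 2
        r = A @ x - b  # gradient of (1/2)(Ax, x) - (b, x)
        x = x - r / lam
    return x
```

For a well-conditioned SPD test matrix (e.g. diag(1, 2, 3, 4) with m = 1, M = 4), both variants drive the residual ‖Ax − b‖ to machine precision well within a few hundred iterations, with the golden-ratio sequence giving visibly steadier decay than the i.i.d. draws, in line with the fluctuation comparison made in the abstract.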