GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM Journal on Scientific and Statistical Computing.
Extrapolation methods for vector sequences. SIAM Review.
A theoretical comparison of the Arnoldi and GMRES algorithms. SIAM Journal on Scientific and Statistical Computing.
Some results about vector extrapolation methods and related fixed-point iterations. Journal of Computational and Applied Mathematics.
Analysis of some vector extrapolation methods for solving systems of linear equations. Numerische Mathematik.
A comparative study on methods for convergence acceleration of iterative vector sequences. Journal of Computational Physics.
Matrix Computations (3rd ed.).
A family of preconditioned iterative solvers for sparse linear systems.
NITSOL: a Newton iterative solver for nonlinear systems. SIAM Journal on Scientific Computing.
Design and application of a gradient-weighted moving finite element code I: in one dimension. SIAM Journal on Scientific Computing.
Iterative procedures for nonlinear integral equations. Journal of the ACM.
A restricted additive Schwarz preconditioner for general sparse linear systems. SIAM Journal on Scientific Computing.
Krylov subspace acceleration of nonlinear multigrid with application to recirculating flows. SIAM Journal on Scientific Computing.
Convergence acceleration during the 20th century. Journal of Computational and Applied Mathematics (special issue on numerical analysis 2000, vol. II: interpolation and extrapolation).
Vector extrapolation methods: applications and numerical comparison. Journal of Computational and Applied Mathematics (special issue on numerical analysis 2000, vol. II: interpolation and extrapolation).
Nonlinearly preconditioned inexact Newton algorithms. SIAM Journal on Scientific Computing.
Iterative Methods for Sparse Linear Systems.
Jacobian-free Newton-Krylov methods: a survey of approaches and applications. Journal of Computational Physics.
Acceleration of the Schwarz method for elliptic problems. SIAM Journal on Scientific Computing.
Numerical Methods for Unconstrained Optimization and Nonlinear Equations (Classics in Applied Mathematics, 16).
Matrix Methods in Data Mining and Pattern Recognition (Fundamentals of Algorithms).
SVD based initialization: a head start for nonnegative matrix factorization. Pattern Recognition.
KSSOLV—a MATLAB toolbox for solving the Kohn-Sham equations. ACM Transactions on Mathematical Software.
Nonlinear Krylov and moving nodes in the method of lines. Journal of Computational and Applied Mathematics (special issue on the method of lines: dedicated to Keith Miller).
On (essentially) non-oscillatory discretizations of evolutionary convection-diffusion equations. Journal of Computational Physics.
Linearity-preserving flux correction and convergence acceleration for constrained Galerkin schemes. Journal of Computational and Applied Mathematics.
A flux-corrected transport algorithm for handling the close-packing limit in dense suspensions. Journal of Computational and Applied Mathematics.
This paper concerns an acceleration method for fixed-point iterations that originated in work of D. G. Anderson [J. Assoc. Comput. Mach., 12 (1965), pp. 547-560], which we accordingly call Anderson acceleration here. This method has enjoyed considerable success and wide usage in electronic structure computations, where it is known as Anderson mixing; however, it seems to have been untried or underexploited in many other important applications. Moreover, while other acceleration methods have been extensively studied by the mathematics and numerical analysis communities, this method has received relatively little attention from these communities over the years. A recent paper by H. Fang and Y. Saad [Numer. Linear Algebra Appl., 16 (2009), pp. 197-221] has clarified a remarkable relationship of Anderson acceleration to quasi-Newton (secant updating) methods and extended it to define a broader Anderson family of acceleration methods. In this paper, our goals are to shed additional light on Anderson acceleration and to draw further attention to its usefulness as a general tool. We first show that, on linear problems, Anderson acceleration without truncation is “essentially equivalent” in a certain sense to the generalized minimal residual (GMRES) method. We also show that the Type 1 variant in the Fang-Saad Anderson family is similarly essentially equivalent to the Arnoldi (full orthogonalization) method. We then discuss practical considerations for implementing Anderson acceleration and illustrate its performance through numerical experiments involving a variety of applications.
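The fixed-point acceleration scheme summarized above can be sketched in a few lines of code. The following is a minimal illustrative implementation, not the authors' code: it uses the common residual-difference least-squares formulation of Anderson acceleration with mixing parameter beta = 1 and a truncation window of m previous iterates; the function name `anderson` and its signature are ours.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxiter=200):
    """Anderson acceleration for the fixed-point iteration x = g(x).

    Illustrative sketch: residual-difference least-squares form,
    mixing parameter beta = 1, history truncated to m iterates.
    """
    x = np.asarray(x0, dtype=float)
    G = [g(x)]            # history of g(x_k)
    F = [G[0] - x]        # history of residuals f_k = g(x_k) - x_k
    x = G[0]              # the first step is a plain Picard step
    for _ in range(maxiter):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            break
        G.append(gx)
        F.append(f)
        if len(F) > m + 1:        # truncate the history window
            G.pop(0)
            F.pop(0)
        # Coefficients gamma minimizing || f - dF @ gamma ||_2
        dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
        dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = gx - dG @ gamma       # accelerated update
    return x
```

With the window large enough that no truncation occurs, applying this to a linear fixed-point map corresponds, in the "essentially equivalent" sense discussed in the abstract, to GMRES on the underlying linear system; in practice a small window such as m = 5 is typical.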