A method for the simultaneous solution of large, sparse sets of linearized equations and the corresponding eigenvalue problems is presented. Such problems arise from the discretization of nonlinear problems with the finite element method and their solution by Newton iteration. The method is based on a parallel version of GMRES(m) preconditioned by deflation. The parallel code exploits the architecture of computational clusters through MPI (the Message Passing Interface). The convergence rate, parallel speedup, and memory requirements of the proposed method are reported and evaluated.
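The restarted GMRES(m) iteration at the core of the method can be sketched as follows. This is a minimal serial illustration, not the authors' parallel, deflation-preconditioned implementation: it builds an m-step Arnoldi basis, solves the small least-squares problem for the residual-minimizing update, and restarts from the current iterate. All function and variable names here are illustrative.

```python
import numpy as np

def gmres_restarted(A, b, m=20, tol=1e-8, max_restarts=50):
    """Restarted GMRES(m): at each cycle, minimize ||b - A x|| over
    x0 + K_m(A, r0), then restart from the improved iterate."""
    n = b.shape[0]
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        # Arnoldi process: orthonormal basis V and upper Hessenberg H
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):          # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:         # "happy breakdown"
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        # Small least-squares problem: min_y ||beta*e1 - H y||
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + V[:, :k] @ y
    return x
```

In the parallel setting described in the abstract, the matrix-vector product and the inner products of the Arnoldi loop are the operations distributed across cluster nodes via MPI, while deflation preconditioning removes the slowly converging eigencomponents that restarting would otherwise stall on.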