We consider linear systems $A(\alpha) x(\alpha) = b(\alpha)$ depending on possibly many parameters $\alpha = (\alpha_1,\ldots,\alpha_p)$. Solving these systems simultaneously for a standard discretization of the parameter range would require a computational effort growing exponentially with the number of parameters. We show that a much lower computational effort can be achieved for sufficiently smooth parameter dependencies. For this purpose, computational methods are developed that exploit the fact that $x(\alpha)$ can be well approximated by a tensor of low rank. In particular, low-rank tensor variants of short-recurrence Krylov subspace methods are presented. Numerical experiments for deterministic PDEs with parametrized coefficients and stochastic elliptic PDEs demonstrate the effectiveness of our approach.
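The low-rank premise can be illustrated with a minimal NumPy sketch (not the paper's algorithm). For a single parameter we take the illustrative shifted family $A(\alpha) = A_0 + \alpha I$ with a 1D Laplacian $A_0$, solve at $m$ parameter samples, and inspect the singular values of the stacked solution matrix $X = [x(\alpha_1), \ldots, x(\alpha_m)]$. The problem sizes, parameter range, and tolerance are assumptions chosen for the demonstration; smooth dependence on $\alpha$ makes the singular values decay rapidly, so $X$ has low numerical rank.

```python
import numpy as np

# Illustrative setup (not the paper's test problems): 1D Laplacian A0
# and the shift parametrization A(alpha) = A0 + alpha*I.
n, m = 200, 50
A0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
alphas = np.linspace(0.1, 2.0, m)  # parameter samples

# Solve the m systems and stack the solutions column by column.
X = np.column_stack([np.linalg.solve(A0 + a * np.eye(n), b) for a in alphas])

# Smooth parameter dependence -> rapidly decaying singular values,
# so X (the matrix case of the tensor x(alpha)) is numerically low rank.
s = np.linalg.svd(X, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))
print(rank, min(n, m))  # numerical rank is far below min(n, m)
```

A low-rank Krylov method exploits exactly this structure: iterates are stored in factored form and re-truncated after each step, instead of ever forming the full $n \times m$ (or higher-order tensor) solution.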