Low-Rank Tensor Krylov Subspace Methods for Parametrized Linear Systems
SIAM Journal on Matrix Analysis and Applications
The numerical solution of linear systems with certain tensor product structures is considered. Such structures arise, for example, from the finite element discretization of a linear PDE on a $d$-dimensional hypercube. Linear systems with tensor product structure can be regarded as linear matrix equations for $d=2$, and for $d>2$ they constitute the most natural extension of such matrix equations. A standard Krylov subspace method applied to such a linear system suffers from the curse of dimensionality: its computational cost grows exponentially with $d$. The key to breaking the curse is the observation that the solution can often be very well approximated by a vector of low tensor rank. We propose and analyze a new class of methods, so-called tensor Krylov subspace methods, which exploit this fact and attain a computational cost that grows linearly with $d$.
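The low-tensor-rank phenomenon the abstract appeals to can be illustrated for $d=2$, where the Kronecker-structured system $(I \otimes A + A \otimes I)\,\mathrm{vec}(X) = \mathrm{vec}(B)$ is a Lyapunov equation $AX + XA^T = B$. The following sketch (not the paper's algorithm; the matrix sizes and tolerance are illustrative choices) solves such a system with a 1D finite-difference Laplacian and a rank-1 right-hand side, and checks that the singular values of the solution decay rapidly, i.e. that $X$ is numerically low-rank:

```python
import numpy as np

n = 50  # grid size per dimension (illustrative)

# 1D finite-difference Laplacian (symmetric tridiagonal)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Rank-1 right-hand side B = b b^T
b = np.ones((n, 1))
B = b @ b.T

# Assemble and solve the full Kronecker-structured system.
# This dense solve costs O(n^6) -- exactly the curse of
# dimensionality that low-rank tensor methods avoid.
I = np.eye(n)
K = np.kron(I, A) + np.kron(A, I)
X = np.linalg.solve(K, B.reshape(-1)).reshape(n, n)

# The singular values of X decay rapidly, so X is well
# approximated by a low-rank (low tensor rank) matrix.
s = np.linalg.svd(X, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))
print(numerical_rank, "<<", n)
```

A low-rank solver would exploit this by never forming `K` or `X` explicitly, instead iterating on low-rank factors; the dense solve here only serves to expose the singular-value decay.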