A new method for the structured representation of matrices and vectors is presented. The method represents a matrix as a $d$-dimensional tensor and applies the recently proposed TT-decomposition to it. In many important cases the number of parameters required to represent an $n\times n$ matrix drops to $\mathcal{O}(\log^{\alpha}n)$, i.e., the storage is logarithmic in the matrix size. It is shown that this format can be used not only to reduce storage but also to carry out linear algebra operations. Possible applications include differential and integral equations as well as data and image compression.
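The construction above can be sketched in a few lines of NumPy. The sketch below reshapes a $2^d \times 2^d$ matrix into a $d$-way tensor whose $k$-th mode pairs the $k$-th row bit with the $k$-th column bit, then compresses it with the standard TT-SVD (successive truncated SVDs of unfoldings). The interleaved index grouping and the function names `tt_svd` / `matrix_to_qtt` are illustrative assumptions, not the paper's exact algorithm or notation; the identity matrix is used as a toy input because it is a Kronecker product of $2\times 2$ identities and therefore has all TT ranks equal to 1.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    # TT-SVD: peel off one mode at a time with a truncated SVD of the
    # current unfolding; each left factor becomes a 3-way TT core.
    dims = tensor.shape
    cores, r = [], 1
    C = tensor
    for k in range(len(dims) - 1):
        C = C.reshape(r * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))  # truncation rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        C = s[:rk, None] * Vt[:rk]
        r = rk
    cores.append(C.reshape(r, dims[-1], 1))
    return cores

def tt_to_full(cores):
    # Contract the train of cores back into the full tensor.
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=(-1, 0))
    return res.reshape([c.shape[1] for c in cores])

def matrix_to_qtt(A, d, eps=1e-10):
    # View a 2^d x 2^d matrix as a d-way tensor with interleaved
    # row/column bits (i1 j1)(i2 j2)...(id jd), then compress with TT-SVD.
    # (This pairing is one common convention; the paper's permutation
    # may differ.)
    T = A.reshape([2] * (2 * d))
    perm = [ax for k in range(d) for ax in (k, d + k)]
    T = T.transpose(perm).reshape([4] * d)
    return tt_svd(T, eps)

d = 4
I = np.eye(2 ** d)
cores = matrix_to_qtt(I, d)
ranks = [c.shape[2] for c in cores[:-1]]
print(ranks)  # all TT ranks of the identity are 1
```

With all TT ranks bounded by a constant $r$, the $d = \log_2 n$ cores hold $\mathcal{O}(r^2 \log n)$ numbers in total, which is the logarithmic storage referred to in the abstract.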