By a tensor problem in general, we mean one in which all input and output data are given (exactly or approximately) in tensor formats, the number of representation parameters being much smaller than the total amount of data. For such problems it is natural to seek algorithms that work with the data only in tensor formats and maintain the same small number of representation parameters; the price is that every result of a computation is contaminated by an approximation (recompression) step occurring in each operation. Since the approximation time is crucial and depends on the tensor formats in use, in this paper we discuss which formats are best suited to making recompression inexpensive and reliable. We present fast recompression procedures whose complexity is sublinear in the size of the data, and we propose methods for basic linear algebra operations with all matrix operands in the Tucker format, implemented mostly through calls to highly optimized level-3 BLAS/LAPACK routines. We show that for three-dimensional tensors the canonical format can be avoided without any loss of efficiency. Numerical illustrations are given for approximate matrix inversion via the proposed recompression techniques.
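To make the recompression idea concrete, the following is a minimal sketch of a generic HOSVD-based recompression of a tensor already given in the Tucker format (a small core plus factor matrices): orthogonalize the factors by QR, fold the triangular parts into the core, then truncate via SVDs of the small core's unfoldings. This is a standard illustration, not the authors' exact procedure; all function names are the sketch's own. Note that it touches only the core and factors, never the full array, so its cost is sublinear in the size of the data, and the dominant work is matrix-matrix products dispatched to level-3 BLAS.

```python
import numpy as np

def mode_unfold(G, k):
    """Mode-k unfolding: move axis k to the front and flatten the rest."""
    return np.moveaxis(G, k, 0).reshape(G.shape[k], -1)

def mode_mult(A, G, k):
    """Multiply tensor G along mode k by matrix A of shape (p, G.shape[k])."""
    rest = [s for i, s in enumerate(G.shape) if i != k]
    M = A @ mode_unfold(G, k)                      # a single GEMM (level-3 BLAS)
    return np.moveaxis(M.reshape([A.shape[0]] + rest), 0, k)

def tucker_to_full(G, factors):
    """Expand a Tucker representation (core G, factor matrices) to a full array."""
    for k, U in enumerate(factors):
        G = mode_mult(U, G, k)
    return G

def tucker_recompress(G, factors, eps=1e-10):
    """HOSVD-based Tucker recompression; works only on the core and factors."""
    # 1. Orthogonalize each factor by QR, folding R into the (small) core.
    Qs = []
    for k, U in enumerate(factors):
        Q, R = np.linalg.qr(U)
        G = mode_mult(R, G, k)
        Qs.append(Q)
    # 2. SVD each unfolding of the core; keep singular vectors whose
    #    singular values exceed a relative threshold eps.
    Ws = []
    for k in range(G.ndim):
        u, s, _ = np.linalg.svd(mode_unfold(G, k), full_matrices=False)
        r = max(1, int(np.count_nonzero(s > eps * s[0])))
        Ws.append(u[:, :r])
    # 3. Project the core onto the dominant subspaces and update the factors.
    for k, W in enumerate(Ws):
        G = mode_mult(W.T, G, k)
    return G, [Q @ W for Q, W in zip(Qs, Ws)]
```

As a quick check, one can inflate a rank-(3,3,3) tensor to a redundant rank-(6,6,6) representation (extra factor columns that are linear combinations of the first three, matched by a zero-padded core) and verify that recompression restores the ranks while preserving the tensor to machine precision.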