There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart-Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-$r$ approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices).

In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step toward more general results. To this end, we present a detailed analysis of equivalence classes of $2 \times 2 \times 2$ tensors, and we develop methods for extending results upward to higher orders and dimensions.

Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular, we make extensive use of the hyperdeterminant $\Delta$ on $\mathbb{R}^{2\times 2 \times 2}$.
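The failure of best rank-2 approximations can be observed numerically. The sketch below (in NumPy; the vectors `a`, `b` and the helper `outer3` are illustrative choices, not code from the paper) uses the standard construction: the rank-3 tensor $a\otimes a\otimes b + a\otimes b\otimes a + b\otimes a\otimes a$ is the limit of a sequence of rank-2 tensors, so its best rank-2 approximation error is an infimum of 0 that is never attained.

```python
import numpy as np

def outer3(x, y, z):
    # Rank-1 order-3 tensor x ⊗ y ⊗ z
    return np.einsum('i,j,k->ijk', x, y, z)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# A rank-3 tensor in R^{2x2x2} with border rank 2
T = outer3(a, a, b) + outer3(a, b, a) + outer3(b, a, a)

for n in [1, 10, 100, 1000]:
    # Each T_n is a difference of two rank-1 terms, hence rank at most 2,
    # yet ||T_n - T|| ~ sqrt(3)/n -> 0 as n grows
    Tn = n * outer3(a + b/n, a + b/n, a + b/n) - n * outer3(a, a, a)
    print(n, np.linalg.norm(Tn - T))
```

Expanding $n(a+b/n)^{\otimes 3} - n\,a^{\otimes 3}$ shows the difference from $T$ is $\frac1n(a\otimes b\otimes b + b\otimes a\otimes b + b\otimes b\otimes a) + \frac1{n^2}\,b^{\otimes 3}$, which explains the $O(1/n)$ decay seen in the output.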
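The hyperdeterminant $\Delta$ on $\mathbb{R}^{2\times 2\times 2}$ is an explicit degree-4 polynomial in the entries (Cayley's formula), and for generic tensors its sign separates the two typical real ranks: $\Delta > 0$ gives rank 2, $\Delta < 0$ gives rank 3, and the hypersurface $\Delta = 0$ contains the degenerate tensors. A minimal sketch, assuming NumPy; the function name `hyperdet` is mine:

```python
import numpy as np

def hyperdet(t):
    # Cayley's hyperdeterminant of a 2x2x2 array t[i, j, k]
    p = [t[0,0,0]*t[1,1,1], t[0,0,1]*t[1,1,0],   # complementary-pair products
         t[0,1,0]*t[1,0,1], t[0,1,1]*t[1,0,0]]
    squares = sum(x * x for x in p)
    cross = sum(p[i] * p[j] for i in range(4) for j in range(i + 1, 4))
    quad = 4 * (t[0,0,0]*t[0,1,1]*t[1,0,1]*t[1,1,0]
                + t[0,0,1]*t[0,1,0]*t[1,0,0]*t[1,1,1])
    return squares - 2 * cross + quad

# A rank-2 tensor: Delta > 0
D = np.zeros((2, 2, 2)); D[0,0,0] = D[1,1,1] = 1
# The rank-3 tensor with no best rank-2 approximation: Delta = 0
W = np.zeros((2, 2, 2)); W[0,0,1] = W[0,1,0] = W[1,0,0] = 1
print(hyperdet(D), hyperdet(W))  # 1.0 0.0
```

For a strictly negative example, the tensor with nonzero entries $t_{000}=t_{011}=t_{101}=1$, $t_{110}=-1$ gives $\Delta=-4$, consistent with real rank 3.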