Tensor-based learning techniques allow one to exploit the structure of carefully chosen representations of data. This is desirable in particular when the number of training patterns is small, which is often the case in areas such as biosignal processing and chemometrics. However, the class of tensor-based models is somewhat restricted and may suffer from limited discriminative power. On a different track, kernel methods lead to flexible nonlinear models that have proven successful in many different contexts. Nonetheless, a naive application of kernel methods does not exploit the structural properties of the given tensorial representations. The goal of this work is to go beyond this limitation by introducing nonparametric tensor-based models. The proposed framework aims at improving the discriminative power of supervised tensor-based models while still exploiting the structural information embodied in the data. We begin by introducing a feature space formed by multilinear functionals, which can be regarded as the infinite-dimensional analogue of tensors. We then show how to implicitly map input patterns into this feature space by means of kernels that exploit the algebraic structure of data tensors. The proposed tensorial kernel is linked to the multilinear singular value decomposition (MLSVD) and features an interesting invariance property; the approach leads to convex optimization and fits into the same primal-dual framework underlying SVM-like algorithms.
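To make the construction concrete, the following is a minimal sketch of one plausible instantiation of such a tensorial kernel, not necessarily the exact kernel defined in the paper: for each mode of the input tensor, extract the dominant subspace of the mode-n unfolding (the mode-n factor of the MLSVD), compare the subspaces of two tensors through a Gaussian kernel on their chordal distance, and take the product over modes. The function names, the rank parameter r, and the bandwidth sigma are illustrative assumptions.

import numpy as np

def mode_unfold(T, n):
    # Mode-n unfolding: move axis n to the front and flatten the rest.
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def leading_subspace(T, n, r):
    # Orthonormal basis of the dominant r-dimensional column space of
    # the mode-n unfolding (the mode-n factor in the MLSVD of T).
    U, _, _ = np.linalg.svd(mode_unfold(T, n), full_matrices=False)
    return U[:, :r]

def tensorial_kernel(X, Y, r=2, sigma=1.0):
    # Illustrative factor-kernel product (an assumption, not the paper's
    # exact definition): Gaussian kernels on squared chordal distances
    # between the mode-n subspaces of two tensors of identical shape.
    k = 1.0
    for n in range(X.ndim):
        Ux = leading_subspace(X, n, r)
        Uy = leading_subspace(Y, n, r)
        # Squared chordal distance ||Ux Ux^T - Uy Uy^T||_F^2, expanded as
        # 2 (r - ||Ux^T Uy||_F^2); it depends on the subspaces only
        # through their projectors, not on the chosen bases.
        d2 = 2.0 * (r - np.linalg.norm(Ux.T @ Uy) ** 2)
        k *= np.exp(-d2 / (2.0 * sigma ** 2))
    return k

# Example: kernel value between two random 3-way tensors.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
Y = rng.standard_normal((4, 5, 6))
print(tensorial_kernel(X, Y))

Because each factor depends on the mode-n subspaces only through their orthogonal projectors, the sketched kernel is invariant to any orthogonal change of basis within those subspaces, which is consistent with the kind of invariance property alluded to above; being a valid positive definite kernel, it can be plugged directly into SVM-like convex primal-dual solvers.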