We consider a general class of regularization methods that learn a vector of parameters from linear measurements. It is well known that if the regularizer is a nondecreasing function of the L2 norm, then the learned vector is a linear combination of the input data. This result, known as the representer theorem, underpins kernel-based methods in machine learning. In this paper, we prove the necessity of the above condition in the case of differentiable regularizers. We further extend our analysis to regularization methods that learn a matrix, a problem motivated by the application to multi-task learning. In this context, we study a more general representer theorem, which holds for a larger class of regularizers. We provide a necessary and sufficient condition characterizing this class of matrix regularizers and highlight some concrete examples of practical importance. Our analysis uses basic principles from matrix theory, in particular the notion of matrix nondecreasing functions.
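For concreteness, here is a minimal sketch of the vector setting; the notation below is illustrative and not taken verbatim from the paper. Given inputs $x_1, \dots, x_m \in \mathbb{R}^d$, an error term $E$ that depends on the parameter vector only through the linear measurements $\langle w, x_i \rangle$, and a regularizer $\Omega$, consider

$$\min_{w \in \mathbb{R}^d} \; E\big(\langle w, x_1 \rangle, \dots, \langle w, x_m \rangle\big) + \Omega(w).$$

If $\Omega(w) = h(\lVert w \rVert_2)$ for some nondecreasing $h : [0, \infty) \to \mathbb{R}$, the representer theorem guarantees a minimizer of the form

$$w = \sum_{i=1}^{m} c_i\, x_i, \qquad c_i \in \mathbb{R},$$

that is, a solution in the span of the data. The necessity result described above runs the other way: if, for a differentiable $\Omega$, every problem of this form admits such a solution, then $\Omega$ must be a nondecreasing function of the L2 norm. In the matrix setting, loosely, one learns $W \in \mathbb{R}^{d \times n}$ with one column $w_t$ per task, and the more general representer theorem allows each $w_t$ to be a linear combination of the inputs of all tasks rather than of task $t$ alone.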