We define what it means for a learning algorithm to be kernelizable in the cases where the instances are vectors, asymmetric matrices, and symmetric matrices, respectively. We characterize kernelizability in terms of an invariance of the algorithm to certain orthogonal transformations. If we assume that the algorithm's action relies on a linear prediction, then we can show that in each case the linear parameter vector must be a certain linear combination of the instances. We give a number of examples of how to apply our methods. In particular, we show how to kernelize multiplicative updates for symmetric instance matrices.
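To illustrate the representer-style property the abstract refers to (the linear parameter vector being a linear combination of the instances, so that the algorithm only ever needs inner products between instances), the following is a minimal sketch for the vector-instance case. It uses a kernelized perceptron with an RBF kernel; both the perceptron and the specific kernel are illustrative choices made here, not algorithms taken from the paper.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    # Gaussian RBF kernel k(a, b) = exp(-gamma * ||a - b||^2); the kernel
    # choice is arbitrary -- any positive-definite kernel would do.
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def train_kernel_perceptron(X, y, kernel=rbf_kernel, epochs=10):
    # The primal weight vector w = sum_j alpha_j * y_j * phi(x_j) is never
    # formed explicitly; the algorithm only maintains the dual coefficients
    # alpha and touches the data through kernel evaluations.
    n = len(X)
    alpha = np.zeros(n)
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    for _ in range(epochs):
        for i in range(n):
            # <w, phi(x_i)> = sum_j alpha_j * y_j * k(x_j, x_i)
            margin = y[i] * np.dot(alpha * y, K[:, i])
            if margin <= 0:
                alpha[i] += 1.0  # mistake-driven update on the dual weights
    return alpha

def predict(alpha, X, y, x_new, kernel=rbf_kernel):
    # Prediction on a new instance again needs only kernel values
    # between x_new and the training instances.
    score = sum(a * yi * kernel(xi, x_new) for a, yi, xi in zip(alpha, y, X))
    return np.sign(score)

# Tiny usage example: XOR-like labels, not linearly separable in input space.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
alpha = train_kernel_perceptron(X, y)
print([predict(alpha, X, y, x) for x in X])  # recovers [-1, 1, 1, -1]
```

The same dualization idea underlies the abstract's claim: whenever the parameter vector stays a linear combination of the instances, the algorithm has an equivalent kernelized version. Extending this to multiplicative updates on symmetric matrix instances is the more delicate case the paper addresses.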