The high computational cost of training kernel methods to solve nonlinear tasks limits their applicability. Recently, however, several fast training methods have been introduced for solving linear learning tasks. These can be used to solve nonlinear tasks by mapping the input data nonlinearly to a low-dimensional feature space. In this work, we consider the mapping induced by decomposing the Nyström approximation of the kernel matrix. We collect prior results and derive new ones to show how to efficiently train, make predictions with, and perform cross-validation for reduced set approximations of learning algorithms, given an efficient linear solver. Specifically, we present an efficient method for removing basis vectors from the mapping, which we show to be important when performing cross-validation.
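To make the Nyström-induced mapping concrete, the following is a minimal sketch: it forms the feature map Phi = K_nb K_bb^{-1/2} from a randomly chosen reduced set of basis vectors, so that Phi Phi^T approximates the full kernel matrix, and then trains a regularized least-squares model in that low-dimensional space with an ordinary linear solver. The Gaussian kernel, the ridge term, the random basis selection, and all function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def nystrom_feature_map(X, basis, gamma=1.0, ridge=1e-8):
    """Map X into the feature space induced by decomposing the Nystrom
    approximation K ~= K_nb K_bb^{-1} K_bn built from the reduced set `basis`:
    Phi = K_nb K_bb^{-1/2}, so that Phi @ Phi.T approximates K."""
    K_bb = gaussian_kernel(basis, basis, gamma)
    K_nb = gaussian_kernel(X, basis, gamma)
    # Symmetric inverse square root of K_bb via its eigendecomposition;
    # the small ridge guards against a numerically singular K_bb (assumption).
    w, V = np.linalg.eigh(K_bb + ridge * np.eye(K_bb.shape[0]))
    return K_nb @ V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# Usage sketch: regularized least squares in the reduced empirical feature space.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

basis = X[rng.choice(len(X), size=20, replace=False)]     # reduced set (random choice, illustrative)
Phi = nystrom_feature_map(X, basis, gamma=0.5)

lam = 1.0                                                 # regularization parameter (assumption)
w_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
y_pred = Phi @ w_hat                                      # predictions on the training inputs
```

Because the model is linear in the mapped features, any efficient linear solver can replace the dense solve above; the paper's contributions concern doing this, together with cross-validation and basis-vector removal, efficiently.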