We study methods for speeding up the classification time of kernel-based classifiers. Existing solutions either explicitly seek sparse classifiers during training or use budgeted versions of the classifier that directly limit the number of basis vectors allowed. Here, we propose a more flexible alternative: instead of using the same basis vectors over the whole feature space, our solution uses different basis vectors in different parts of the feature space. At the core of our solution lies an optimization procedure that, given a set of basis vectors, finds a good partition of the feature space and good subsets of the existing basis vectors. Applying this procedure repeatedly, we build trees whose internal nodes specify feature-space partitions and whose leaves implement simple kernel classifiers. Experiments suggest that our method significantly reduces classification time while maintaining accuracy. In addition, we propose several heuristics that also perform well.
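To make the tree structure concrete, the following is a minimal sketch of how such a classifier might evaluate a test point. The axis-aligned splits, the RBF kernel, and all names (Leaf, Split, rbf_kernel) are illustrative assumptions rather than the paper's actual construction; the sketch only shows the key property that each test point reaches a single leaf and pays kernel-evaluation cost only for that leaf's local subset of basis vectors.

import numpy as np

def rbf_kernel(x, basis, gamma=1.0):
    # RBF kernel values between one point x and a set of basis vectors.
    diffs = basis - x                          # shape (m, d)
    return np.exp(-gamma * np.sum(diffs * diffs, axis=1))

class Leaf:
    # A simple kernel classifier over a local subset of the basis vectors.
    def __init__(self, basis, alphas, bias=0.0):
        self.basis = basis                     # (m, d) local basis vectors
        self.alphas = alphas                   # (m,) expansion coefficients
        self.bias = bias

    def decision(self, x):
        return float(self.alphas @ rbf_kernel(x, self.basis) + self.bias)

class Split:
    # Internal node: an axis-aligned feature-space partition (assumed form).
    def __init__(self, feature, threshold, left, right):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right

    def decision(self, x):
        child = self.left if x[self.feature] <= self.threshold else self.right
        return child.decision(x)

def classify(tree, x):
    return 1 if tree.decision(x) >= 0.0 else -1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two leaves with different (and differently sized) basis-vector subsets.
    left = Leaf(rng.normal(size=(3, 2)), np.array([0.5, -0.2, 0.8]))
    right = Leaf(rng.normal(size=(4, 2)), np.array([-0.3, 0.6, 0.1, -0.4]))
    tree = Split(feature=0, threshold=0.0, left=left, right=right)
    print(classify(tree, np.array([-0.7, 0.2])))   # costs only 3 kernel evaluations
    print(classify(tree, np.array([0.9, -0.1])))   # costs only 4 kernel evaluations

In this toy setup, a flat classifier would need all seven basis vectors for every test point, whereas the tree routes each point to one leaf and evaluates only that leaf's three or four kernels; the speedup grows with the number of leaves and the sparsity of their local subsets.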