The Nature of Statistical Learning Theory
Reducing the run-time complexity in support vector machines. In Advances in Kernel Methods
Computers and Intractability: A Guide to the Theory of NP-Completeness
Introduction to Algorithms
A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery
Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research
Text classification using string kernels. Journal of Machine Learning Research
Exact simplification of support vector solutions. Journal of Machine Learning Research
Some greedy learning algorithms for sparse regression and classification with Mercer kernels. Journal of Machine Learning Research
A support vector method for multivariate performance measures. In Proceedings of the 22nd International Conference on Machine Learning (ICML '05)
Fast and space efficient string kernels using suffix arrays. In Proceedings of the 23rd International Conference on Machine Learning (ICML '06)
Training linear SVMs in linear time. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
A direct method for building sparse kernel learning algorithms. Journal of Machine Learning Research
Sparse kernel SVMs via cutting-plane training. Machine Learning
Early exit optimizations for additive machine learned ranking systems. In Proceedings of the Third ACM International Conference on Web Search and Data Mining
LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST)
Reducing SVM classification time using multiple mirror classifiers. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
The pre-image problem in kernel methods. IEEE Transactions on Neural Networks
Example-dependent basis vector selection for kernel-based classifiers. In Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '10), Part III
Kernel-based methods, such as nonlinear support vector machines, achieve high classification accuracy in many applications. However, classification with these methods can be slow when the kernel function is expensive to compute and must be evaluated many times. Existing solutions to this problem seek a representation of the decision surface in terms of only a few basis vectors, so that only a small number of kernel evaluations is needed. In all of these methods, however, the set of basis vectors used is independent of the example to be classified. In this paper we propose to adaptively select a small number of basis vectors for each unseen example. The set of basis vectors is thus not fixed, but depends on the input to the classifier. Our approach is to first learn a non-sparse kernel machine using some existing technique, and then to use training data to find a function that maps unseen examples to subsets of the basis vectors used by this kernel machine. We propose to represent this function as a binary tree, called a support vector tree, and devise a greedy algorithm for finding good trees. In our experiments the proposed approach outperforms existing techniques in a number of cases.
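The idea in the abstract can be illustrated with a minimal sketch: train a non-sparse kernel machine, then route each test example through a tree whose leaves hold small, leaf-specific subsets of the basis vectors, so only a few kernels are evaluated per prediction. Everything below is hypothetical scaffolding, not the paper's actual algorithm: a kernel perceptron stands in for the SVM, the "tree" is a single depth-1 split on one feature, the leaf subsets are chosen by a simple greedy score (total kernel response to the training points routed to that leaf), and the data, thresholds, and subset size k are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(x, Z, gamma=1.0):
    # RBF kernel between a single point x and the rows of Z
    return np.exp(-gamma * np.sum((Z - x) ** 2, axis=1))

# Toy data: two Gaussian blobs (stand-in for a real dataset)
X = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
y = np.hstack([-np.ones(40), np.ones(40)])

# Step 1: learn a non-sparse kernel machine. A kernel perceptron is used
# here purely for self-containedness; the paper assumes an existing
# technique such as a nonlinear SVM.
alpha = np.zeros(len(X))
for _ in range(20):
    for i in range(len(X)):
        f = np.sum(alpha * y * rbf(X[i], X))
        if y[i] * f <= 0:
            alpha[i] += 1.0
basis = np.flatnonzero(alpha > 0)   # indices of the basis vectors

# Step 2: a depth-1 stand-in for a support vector tree. Each leaf keeps
# only the k basis vectors with the largest summed kernel response to the
# training points that fall in that leaf (a greedy, illustrative choice).
feat, thr, k = 0, 0.0, 5
def leaf_subset(mask):
    score = np.zeros(len(basis))
    for i in np.flatnonzero(mask):
        score += rbf(X[i], X[basis])
    return basis[np.argsort(score)[-k:]]

left_sub = leaf_subset(X[:, feat] <= thr)
right_sub = leaf_subset(X[:, feat] > thr)

def predict(x):
    # Classification evaluates kernels only against the leaf's small
    # subset, not against all basis vectors.
    sub = left_sub if x[feat] <= thr else right_sub
    return np.sign(np.sum(alpha[sub] * y[sub] * rbf(x, X[sub])))

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
```

Each prediction here touches at most k = 5 basis vectors instead of all of them, which is the source of the speedup the paper is after; the real method grows deeper trees greedily rather than fixing one hand-picked split.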