Everything old is new again: a fresh look at historical approaches in machine learning
We develop a new multiclass classification method that reduces the multiclass problem to a single binary classifier (SBC). Our method constructs the binary problem by embedding smaller binary problems into a single space; a good embedding admits large-margin classification. We show that constructing such an embedding can be reduced to the task of learning linear combinations of kernels. We provide a bound on the generalization error of the multiclass classifier obtained with our construction and outline the conditions under which it is consistent. Our empirical evaluation indicates that the new method outperforms one-vs-all, all-pairs, and error-correcting output coding schemes, at least when the number of classes is small.
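To make the reduction concrete, the sketch below (Python with scikit-learn and NumPy, both assumptions of this illustration rather than tools named by the paper) embeds a k-class problem into a single binary problem via a joint feature map over (input, class) pairs and trains one binary SVM on it. The paper learns the class embedding through linear combinations of kernels; here the class codes are fixed one-hot vectors (a Kesler-style construction), so this is only a minimal illustration of the general "single binary classifier" idea, not the authors' method.

```python
# Minimal sketch: multiclass -> one binary problem via a joint (input, class) feature map.
# Assumptions: one-hot class codes, LinearSVC as the binary learner, iris as example data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC


def joint_features(X, classes, n_classes):
    """Place each x in the feature block of its paired class (Kesler-style embedding)."""
    n, d = X.shape
    Z = np.zeros((n, d * n_classes))
    for i, (x, c) in enumerate(zip(X, classes)):
        Z[i, c * d:(c + 1) * d] = x
    return Z


def to_single_binary(X, y, n_classes):
    """Pair every example with every class; label +1 iff the paired class is the true one."""
    pairs_X, pairs_c, labels = [], [], []
    for x, yi in zip(X, y):
        for c in range(n_classes):
            pairs_X.append(x)
            pairs_c.append(c)
            labels.append(1 if c == yi else -1)
    Z = joint_features(np.array(pairs_X), np.array(pairs_c), n_classes)
    return Z, np.array(labels)


X, y = load_iris(return_X_y=True)
n_classes = len(np.unique(y))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One binary large-margin classifier trained on the embedded problem.
Z_tr, b_tr = to_single_binary(X_tr, y_tr, n_classes)
clf = LinearSVC(C=1.0, max_iter=10000).fit(Z_tr, b_tr)

# Predict by scoring every (x, class) pairing and taking the highest-scoring class.
scores = np.stack(
    [clf.decision_function(joint_features(X_te, np.full(len(X_te), c), n_classes))
     for c in range(n_classes)],
    axis=1)
pred = scores.argmax(axis=1)
print("multiclass accuracy via a single binary SVM:", (pred == y_te).mean())
```

With fixed one-hot codes this collapses to a shared-regularization variant of one-vs-all; the point of the paper's construction is precisely to replace such fixed codes with an embedding learned through kernel combinations so that the single binary problem admits a large margin.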