The nature of statistical learning theory. Machine Learning.
Making large-scale support vector machine learning practical. Advances in kernel methods.
An introduction to Support Vector Machines and other kernel-based learning methods.
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond.
On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research.
Support vector machine learning for interdependent and structured output spaces. ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning.
Improving multiclass pattern recognition by the combination of two strategies. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Generalized Bradley-Terry models and multi-class probability estimates. The Journal of Machine Learning Research.
Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research.
No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation.
A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks.
In this paper we consider multiclass learning tasks based on Support Vector Machines (SVMs). The methods currently in use are One-Against-All and One-Against-One, but there is still much room for improvement in multiclass learning. We developed a novel combination algorithm called Comb-ECOC, which is based on posterior class probabilities: following the Bayes rule, it assigns each instance to the class with the highest posterior probability. A problem with any multiclass method is the proper choice of parameters; many users simply take the default parameters of the respective learning algorithms (e.g. the regularization parameter C and the kernel parameter). We tested different parameter optimization methods on different learning algorithms and confirmed the better performance of One-Against-One versus One-Against-All, which can be explained by the maximum margin approach of SVMs.
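The decoding idea behind a probability-based ECOC combination can be sketched as follows. This is an illustrative simplification, not the paper's Comb-ECOC algorithm: each column of a coding matrix defines a binary subproblem, each subproblem is assumed to yield a probability estimate for its +1 group, and the per-column estimates are combined naive-Bayes style so that the instance goes to the class whose codeword has the highest posterior-style score. The function name and the coding matrix below are hypothetical.

```python
import math

def ecoc_decode(code_matrix, col_probs):
    """Pick the class whose codeword best explains the column probabilities.

    code_matrix[k][j] in {+1, -1}: role of class k in binary subproblem j.
    col_probs[j]: estimated probability that the instance belongs to the
    +1 group of subproblem j (hypothetical per-column estimates).

    Scores are log-likelihoods under independence of the columns, a
    naive-Bayes-style combination; the argmax mimics assigning the
    instance to the class with the highest posterior probability.
    """
    best_class, best_score = None, float("-inf")
    for k, codeword in enumerate(code_matrix):
        score = 0.0
        for bit, p in zip(codeword, col_probs):
            p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard against log(0)
            score += math.log(p if bit == +1 else 1.0 - p)
        if score > best_score:
            best_class, best_score = k, score
    return best_class

# One-Against-All coding for 3 classes: column j separates class j
# from the rest. With column probabilities favoring class 1:
codes = [[+1, -1, -1],
         [-1, +1, -1],
         [-1, -1, +1]]
print(ecoc_decode(codes, [0.2, 0.7, 0.1]))  # -> 1
```

Note that the coding matrix also fixes the training cost of the scheme: One-Against-All needs K binary classifiers for K classes, while One-Against-One needs K(K-1)/2 of them, each trained on a smaller two-class subset.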