Hard and soft classifiers are two important groups of techniques for classification problems. Logistic regression and the Support Vector Machine are typical examples of soft and hard classifiers, respectively. The essential difference between the two groups is whether the class conditional probability must be estimated for the classification task. In particular, soft classifiers predict the label from the estimated class conditional probabilities, while hard classifiers bypass probability estimation and focus directly on the decision boundary. In practice, when the goal is accurate classification, it is often unclear which type to use in a given situation. To tackle this problem, the Large-margin Unified Machine (LUM) was recently proposed as a unified family that embraces both groups and enables one to study how behavior changes from soft to hard binary classifiers. For multicategory problems, however, the distinction between soft and hard classification becomes less clear, and class probability estimation becomes more involved because an entire probability vector must be estimated. In this paper, we propose a new Multicategory LUM (MLUM) framework to investigate the behavior of soft versus hard classification in multicategory settings. Our theoretical and numerical results shed light on the nature of multicategory classification and its transition from soft to hard classifiers. The numerical results suggest that the proposed tuned MLUM yields very competitive performance.
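To make the soft-to-hard transition concrete, the sketch below implements one common parameterization of the binary LUM loss as a function of the functional margin u = y·f(x), drawn from the binary LUM literature rather than from this abstract (the parameter names `a` and `c` and the exact functional form are assumptions of this sketch, not definitions given here). At c = 0 the loss is smooth everywhere, matching soft, probability-estimating classifiers; as c → ∞ it approaches the SVM hinge loss, the prototypical hard classifier.

```python
import numpy as np

def lum_loss(u, a=1.0, c=1.0):
    """One common parameterization of the binary LUM loss (a > 0, c >= 0).

    u is the functional margin y * f(x).  The loss is linear (hinge-like)
    for u below c / (1 + c) and decays smoothly above it.  c = 0 gives a
    differentiable "soft" loss; c -> infinity recovers the hinge loss.
    """
    u = np.asarray(u, dtype=float)
    thresh = c / (1.0 + c)
    # Left branch: same 1 - u slope as the hinge loss.
    linear = 1.0 - u
    # Right branch: smooth tail; clip the denominator so the (discarded)
    # values computed for u < thresh stay well-defined under np.where.
    denom = np.maximum((1.0 + c) * u - c + a, 1e-12)
    tail = (1.0 / (1.0 + c)) * (a / denom) ** a
    return np.where(u < thresh, linear, tail)
```

The two branches meet continuously at u = c/(1+c), where both equal 1/(1+c); varying c then traces a path of classifiers between the soft (c = 0) and hard (c large, hinge-like) ends of the family, which is the one-dimensional picture the MLUM framework generalizes to multiple categories.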