Support vector machines (SVMs) are naturally sparse because they use the hinge loss. However, SVMs cannot directly estimate conditional class probabilities. In this paper we propose and study a family of coherence functions, which are convex and differentiable, as surrogates for the hinge loss. The coherence function is derived from the maximum-entropy principle and is characterized by a temperature parameter; it bridges the hinge loss and the logit loss of logistic regression. In the zero-temperature limit the coherence function reduces to the hinge loss, and the minimizer of its expected error converges to the minimizer of the expected error of the hinge loss. We refer to the use of the coherence function in large-margin classification as "C-learning," and we present efficient coordinate descent algorithms for training regularized C-learning models.
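The zero-temperature limit described above can be illustrated numerically. The sketch below uses one plausible smoothed-hinge form, L_T(z) = T * log(1 + exp((1 - z)/T)); the exact functional form of the paper's coherence function is an assumption here, and the example only demonstrates the claimed behavior: the loss is smooth for every T > 0 and approaches the hinge loss max(0, 1 - z) as T approaches 0.

```python
import numpy as np

def hinge(z):
    # Standard hinge loss on the margin z = y * f(x).
    return np.maximum(0.0, 1.0 - z)

def coherence(z, T):
    # Hypothetical smoothed hinge: T * log(1 + exp((1 - z)/T)).
    # np.logaddexp(0, x) computes log(1 + exp(x)) without overflow.
    return T * np.logaddexp(0.0, (1.0 - z) / T)

# As T -> 0 the smoothed loss approaches the hinge loss pointwise;
# the worst-case gap, T * log(2), occurs at the hinge kink z = 1.
z = np.linspace(-2.0, 2.0, 9)
for T in (1.0, 0.1, 0.01):
    gap = np.max(np.abs(coherence(z, T) - hinge(z)))
    print(f"T={T}: max gap to hinge = {gap:.4f}")
```

At T = 1 the function is a (shifted) logistic-type loss, consistent with the abstract's claim that the family bridges the hinge loss and the logit loss.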