The scores returned by support vector machines are often used as confidence measures in the classification of new examples. However, there is no theoretical argument supporting this practice. Thus, when classification uncertainty has to be assessed, it is safer to resort to classifiers that estimate the conditional probabilities of class labels. Here, we focus on the ambiguity in the vicinity of the decision boundary. We propose an adaptation of maximum likelihood estimation, instantiated on logistic regression. The model outputs proper conditional probabilities within a user-defined interval and is less precise elsewhere. The model is also sparse, in the sense that few examples contribute to the solution, which improves computational efficiency compared to logistic regression. Furthermore, preliminary experiments show improvements over standard logistic regression and performance similar to support vector machines.
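The idea described in the abstract can be illustrated with a minimal sketch (an assumption about the mechanism, not the paper's exact formulation): fit logistic regression by maximum likelihood, but clip the sigmoid link to a user-defined probability interval `[p_lo, p_hi]`. Examples whose raw probability falls outside the interval contribute a constant term to the likelihood, hence a zero gradient, so only examples near the decision boundary shape the solution; this is one way sparsity can arise.

```python
# Hedged sketch: logistic regression with a clipped sigmoid link.
# Names (clipped_sigmoid, p_lo, p_hi) are illustrative, not from the paper.
import numpy as np

def clipped_sigmoid(z, p_lo=0.2, p_hi=0.8):
    """Sigmoid saturated to the user-defined interval [p_lo, p_hi]."""
    return np.clip(1.0 / (1.0 + np.exp(-z)), p_lo, p_hi)

def nll_grad(w, X, y, p_lo=0.2, p_hi=0.8):
    """Gradient of the clipped negative log-likelihood.

    Clipped examples have a locally constant likelihood, so their
    gradient contribution vanishes: only 'active' examples (raw
    probability strictly inside the interval) drive the fit.
    """
    z = X @ w
    raw = 1.0 / (1.0 + np.exp(-z))
    p = np.clip(raw, p_lo, p_hi)
    active = (raw > p_lo) & (raw < p_hi)
    return X.T @ ((p - y) * active) / len(y), active

# Toy data drawn from a logistic model with known weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w_true = np.array([2.0, -1.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

# Plain gradient descent; as |X @ w| grows, confidently classified
# examples saturate and drop out of the gradient.
w = np.zeros(2)
for _ in range(500):
    g, active = nll_grad(w, X, y)
    w -= 0.5 * g
print("weights:", w, "active fraction:", active.mean())
```

Only the examples flagged `active` in the final iteration contribute to the solution, mirroring the sparsity claimed in the abstract: probabilities are estimated properly inside `[p_lo, p_hi]` and merely bounded outside it.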