Estimating the Posterior Probabilities Using the K-Nearest Neighbor Rule
Neural Computation
Decision theory shows that the optimal decision is a function of the posterior class probabilities. More specifically, in binary classification the optimal decision is made by comparing the posterior probability with a threshold; consequently, the most accurate estimates of the posterior probabilities are required near these decision thresholds. This paper discusses the design of objective functions that provide more accurate estimates of the probability values in those regions, taking into account the characteristics of each decision problem. We propose learning algorithms based on the stochastic gradient minimization of these loss functions, and we show that classifier performance improves when the algorithms act as sample selectors: samples near the decision boundary become the most relevant ones during learning.
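The idea of stochastic gradient training that concentrates on samples near the decision threshold can be sketched as follows. This is a minimal illustration, not the paper's exact objective: it assumes a logistic model and a hypothetical Gaussian weighting (`weight_bandwidth`) that up-weights the log-loss gradient of samples whose current predicted probability is close to the threshold.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_threshold_weighted(X, y, threshold=0.5, lr=0.1, epochs=200,
                           weight_bandwidth=0.02, seed=0):
    """Logistic regression trained by SGD with a sample-selector effect:
    each per-sample gradient is scaled by a weight that peaks when the
    current predicted probability equals the decision threshold.
    (Illustrative weighting scheme, assumed for this sketch.)"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = sigmoid(X[i] @ w + b)
            # weight is 1 at p == threshold and decays for confident samples,
            # so learning is driven by points near the decision boundary
            weight = np.exp(-((p - threshold) ** 2) / weight_bandwidth)
            grad = weight * (p - y[i])  # weighted log-loss gradient w.r.t. logit
            w -= lr * grad * X[i]
            b -= lr * grad
    return w, b
```

On a simple synthetic two-class problem, samples far from the boundary quickly receive near-zero weight, while ambiguous samples near `p ≈ threshold` keep driving the updates, which mirrors the sample-selector behavior described above.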