This paper presents a new formulation for cost-sensitive learning that we call the One-Benefit formulation. Instead of having the correct label for each training example, as in the standard classifier learning formulation, in this formulation we have one possible label for each example (which may not be the correct one) together with the benefit (or cost) associated with that label. The goal of learning in this formulation is to find the classifier that maximizes the expected benefit of its labeling using only these examples. We present a reduction from One-Benefit learning to standard classifier learning that allows any existing error-minimizing classifier learner to maximize the expected benefit in this formulation, by appropriately weighting the examples. We also show how to evaluate a classifier using test examples for which we know the benefit for only one of the labels. We present preliminary experimental results using a synthetic data generator that allows us to test both our learning method and our evaluation method.
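To make the reduction and the evaluation idea concrete, here is a minimal sketch in Python with scikit-learn. It is not the paper's exact construction: it assumes the observed labels in the data were assigned uniformly at random during collection and that benefits are nonnegative, and the function names, variable names, and the choice of logistic regression as the base error-minimizing learner are all illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_one_benefit(X, a, b):
    # Reduction sketch: treat the single observed label a_i as the
    # training target and weight each example by its observed benefit
    # b_i (assumed nonnegative), then run any standard error-minimizing
    # learner -- logistic regression here as a stand-in.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, a, sample_weight=b)
    return clf

def estimate_expected_benefit(clf, X, a, b, n_labels):
    # Evaluation sketch: each test example reveals the benefit of only
    # one label, so its benefit is counted only when the classifier
    # predicts that label, reweighted by the inverse propensity of
    # observing it (n_labels, under uniform random label assignment).
    pred = clf.predict(X)
    return float(np.mean((pred == a) * b * n_labels))
```

Under the uniform-assignment assumption, the evaluation estimator is unbiased: the expectation of the indicator times the inverse-propensity weight recovers the benefit of the classifier's own labeling, even though that benefit is never directly observed for most predictions.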