Cost-Sensitive Self-Training

  • Authors:
  • Yuanyuan Guo; Harry Zhang; Bruce Spencer

  • Affiliations:
  • Yuanyuan Guo and Harry Zhang: Faculty of Computer Science, University of New Brunswick, Fredericton, NB, Canada; Bruce Spencer: National Research Council Canada, Fredericton, NB, Canada

  • Venue:
  • Canadian AI'12: Proceedings of the 25th Canadian Conference on Advances in Artificial Intelligence
  • Year:
  • 2012


Abstract

In some real-world applications, collecting labeled data is time-consuming or expensive, while unlabeled data is easier to obtain. Many semi-supervised learning methods have been proposed to address this problem by exploiting the unlabeled data. Moreover, on some datasets, different misclassification errors incur different costs, which challenges the common assumption in classification that all classes share the same misclassification cost. For example, misclassifying a fraudulent transaction as legitimate may be far more serious than misclassifying a legitimate transaction as fraudulent. In this paper, we propose a cost-sensitive self-training method (CS-ST) to improve the performance of Naive Bayes when labeled instances are scarce and different misclassification errors carry different costs. CS-ST incorporates the misclassification costs into the learning process of self-training and approximately estimates the misclassification error to guide the selection of unlabeled instances. Experiments on 13 UCI datasets and three text datasets show that, in terms of the total misclassification cost and the number of correctly classified instances from the costlier classes, CS-ST outperforms both standard self-training and the base classifier learned from the original labeled data alone.
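The abstract describes the method only at a high level. As a rough illustration of how misclassification costs can be folded into a self-training loop with Naive Bayes, the following Python sketch labels unlabeled instances by minimizing expected cost rather than maximizing posterior probability. This is a minimal sketch under our own assumptions, not the authors' CS-ST algorithm: the function name, the batch-selection rule, the stopping condition, and the use of scikit-learn's GaussianNB are all illustrative choices; in particular, CS-ST's approximate misclassification-error estimate is replaced here by a simple expected-cost criterion.

```python
# Hypothetical sketch of a cost-sensitive self-training loop (not the paper's
# exact CS-ST algorithm). cost_matrix[i][j] is the cost of predicting class j
# when the true class is i, with rows/columns ordered to match clf.classes_
# (scikit-learn sorts class labels).
import numpy as np
from sklearn.naive_bayes import GaussianNB

def cost_sensitive_self_training(X_lab, y_lab, X_unlab, cost_matrix,
                                 batch_size=10, max_iter=20):
    """Iteratively pseudo-label unlabeled instances, preferring those whose
    cost-minimizing label has the lowest expected misclassification cost."""
    X_lab, y_lab = np.asarray(X_lab), np.asarray(y_lab)
    pool = np.asarray(X_unlab)
    clf = GaussianNB().fit(X_lab, y_lab)
    for _ in range(max_iter):
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)        # P(class | x) per instance
        # Expected cost of predicting class j: sum_i P(i | x) * cost[i][j]
        exp_cost = proba @ np.asarray(cost_matrix)
        pred = exp_cost.argmin(axis=1)         # cost-minimizing label index
        confidence = -exp_cost.min(axis=1)     # lower cost = higher confidence
        picked = np.argsort(confidence)[-batch_size:]
        # Move the most confident instances into the labeled set and retrain.
        X_lab = np.vstack([X_lab, pool[picked]])
        y_lab = np.concatenate([y_lab, clf.classes_[pred[picked]]])
        pool = np.delete(pool, picked, axis=0)
        clf = GaussianNB().fit(X_lab, y_lab)
    return clf
```

For the fraud example in the abstract, a cost matrix like `[[0, 1], [10, 0]]` (classes ordered legitimate, fraud) would make missing a fraud ten times as costly as flagging a legitimate transaction, so the loop favors pseudo-labels that avoid the expensive error.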