A Novel Contrast Co-learning Framework for Generating High Quality Training Data

  • Authors:
  • Zeyu Zheng; Jun Yan; Shuicheng Yan; Ning Liu; Zheng Chen; Ming Zhang

  • Venue:
  • ICDM '10 Proceedings of the 2010 IEEE International Conference on Data Mining
  • Year:
  • 2010

Abstract

The good performance of most classical learning algorithms generally relies on high-quality training data that are clean and unbiased. Such data, however, are becoming harder than ever to obtain in many real-world problems because of the difficulty of collecting large-scale unbiased data and labeling them precisely for training. In this paper, we propose a general Contrast Co-learning (CCL) framework to refine biased and noisy training data when an unbiased yet unlabeled data pool is available. CCL starts with multiple sets of possibly biased and noisy training data and trains a classifier on each set individually. Then, under the assumption that confidently classified samples are more likely to be correctly labeled, CCL iteratively and automatically filters out probable data noise and adds confidently classified samples from the unlabeled pool to correct the bias. Through this process, we can generate a cleaner and unbiased training dataset with theoretical guarantees. Extensive experiments on two public text datasets show that CCL consistently improves classification performance on biased and noisy training data compared with several state-of-the-art classical algorithms.
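
The abstract describes an iterative refine-and-augment loop. The sketch below illustrates one plausible reading of that loop, assuming scikit-learn logistic regression as the base learner; the confidence thresholds, agreement rule, and number of rounds are illustrative assumptions, not the paper's exact procedure.

```python
# A minimal sketch of a Contrast Co-learning (CCL)-style iteration.
# Assumptions (not from the paper): logistic-regression base classifiers,
# simple probability thresholds, and an "all classifiers agree" rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ccl_sketch(train_sets, pool_X, rounds=5, add_thresh=0.9, drop_thresh=0.9):
    """train_sets: list of (X, y) pairs, each possibly biased/noisy.
    pool_X: unbiased but unlabeled data pool (array of feature rows)."""
    train_sets = [(np.asarray(X), np.asarray(y)) for X, y in train_sets]
    pool_X = np.asarray(pool_X)

    for _ in range(rounds):
        # 1. Train one classifier per (possibly noisy) training set.
        clfs = [LogisticRegression(max_iter=1000).fit(X, y) for X, y in train_sets]

        # 2. Filter: drop training samples whose given label the *other*
        #    classifiers confidently contradict (likely label noise).
        new_sets = []
        for i, (X, y) in enumerate(train_sets):
            keep = np.ones(len(y), dtype=bool)
            for j, clf in enumerate(clfs):
                if j == i:
                    continue
                proba = clf.predict_proba(X)
                pred = clf.classes_[proba.argmax(axis=1)]
                conf = proba.max(axis=1)
                keep &= ~((pred != y) & (conf >= drop_thresh))
            new_sets.append([X[keep], y[keep]])

        # 3. Correct bias: move pool samples that all classifiers label
        #    confidently and consistently into every training set.
        if len(pool_X):
            probas = [clf.predict_proba(pool_X) for clf in clfs]
            preds = [clf.classes_[p.argmax(axis=1)] for p, clf in zip(probas, clfs)]
            confs = [p.max(axis=1) for p in probas]
            agree = np.all([preds[0] == p for p in preds], axis=0)
            confident = np.all([c >= add_thresh for c in confs], axis=0)
            pick = agree & confident
            if pick.any():
                for s in new_sets:
                    s[0] = np.vstack([s[0], pool_X[pick]])
                    s[1] = np.concatenate([s[1], preds[0][pick]])
                pool_X = pool_X[~pick]

        train_sets = [(X, y) for X, y in new_sets]
    return train_sets
```

The key design point mirrored here is that noise removal and bias correction happen jointly in each round: disagreement among classifiers trained on contrasting data is treated as evidence of noise, while confident agreement on the unbiased pool is treated as evidence for safe augmentation.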