CoTrade: Confident Co-Training With Data Editing

  • Authors:
  • Min-Ling Zhang; Zhi-Hua Zhou

  • Affiliations:
  • School of Computer Science and Engineering, Southeast University, Nanjing, China; National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 2011

Abstract

Co-training is one of the major semi-supervised learning paradigms, which iteratively trains two classifiers on two different views and uses the predictions of either classifier on unlabeled examples to augment the training set of the other. During the co-training process, especially in the initial rounds when the classifiers have only mediocre accuracy, one classifier may receive labels on unlabeled examples that were erroneously predicted by the other. As a result, the performance of co-training-style algorithms is usually unstable. In this paper, the problem of how to reliably communicate labeling information between different views is addressed by a novel co-training algorithm named CoTrade. In each labeling round, CoTrade carries out the label communication process in two steps. First, the confidence of each classifier's predictions on unlabeled examples is explicitly estimated based on specific data editing techniques. Second, a number of high-confidence predicted labels from each classifier are passed to the other, where certain constraints are imposed to avoid introducing undesirable classification noise. Experiments on several real-world datasets across three domains show that CoTrade can effectively exploit unlabeled data to achieve better generalization performance.
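
For intuition, below is a minimal sketch of the generic co-training loop the abstract describes, in Python with scikit-learn-style classifiers and NumPy arrays. It is not the authors' algorithm: where CoTrade explicitly estimates prediction confidence via data editing techniques and constrains the transferred labels to limit classification noise, this sketch simply ranks unlabeled examples by `predict_proba` scores. Names such as `co_train` and `n_per_round` are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # example base learners

def co_train(clf1, clf2, X1, X2, y, U1, U2, rounds=10, n_per_round=5):
    """Co-train clf1 and clf2 on two views of the same data.

    X1, X2: labeled examples under view 1 and view 2 (same rows, labels y).
    U1, U2: the same unlabeled examples under the two views.
    """
    L1_X, L1_y = X1.copy(), y.copy()  # labeled pool for the view-1 classifier
    L2_X, L2_y = X2.copy(), y.copy()  # labeled pool for the view-2 classifier
    for _ in range(rounds):
        clf1.fit(L1_X, L1_y)
        clf2.fit(L2_X, L2_y)
        if len(U1) == 0:
            break
        # Each classifier scores the unlabeled pool under its own view.
        p1, p2 = clf1.predict_proba(U1), clf2.predict_proba(U2)
        # Keep the n_per_round most confident predictions from each side.
        top1 = np.argsort(p1.max(axis=1))[-n_per_round:]
        top2 = np.argsort(p2.max(axis=1))[-n_per_round:]
        # clf1's confident labels augment clf2's pool, and vice versa.
        L2_X = np.vstack([L2_X, U2[top1]])
        L2_y = np.concatenate([L2_y, clf1.classes_[p1[top1].argmax(axis=1)]])
        L1_X = np.vstack([L1_X, U1[top2]])
        L1_y = np.concatenate([L1_y, clf2.classes_[p2[top2].argmax(axis=1)]])
        # Drop the newly labeled examples from both unlabeled views.
        keep = np.setdiff1d(np.arange(len(U1)), np.union1d(top1, top2))
        U1, U2 = U1[keep], U2[keep]
    return clf1, clf2

# Usage sketch:
# clf1, clf2 = co_train(LogisticRegression(max_iter=1000),
#                       LogisticRegression(max_iter=1000),
#                       X1, X2, y, U1, U2)
```

The instability the abstract points to shows up in this loop directly: any mislabeled example added to a pool is trained on in every later round, which is why CoTrade replaces the naive confidence ranking above with data-editing-based estimates and explicit noise constraints.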