Co-training is one of the major semi-supervised learning paradigms: it iteratively trains two classifiers on two different views of the data, and uses each classifier's predictions on unlabeled examples to augment the training set of the other. During the co-training process, especially in the initial rounds when the classifiers have only mediocre accuracy, one classifier may well receive labels that the other classifier has predicted erroneously, so the performance of co-training-style algorithms is often unstable. This paper addresses the problem of reliably communicating labeling information between the two views with a novel co-training algorithm named CoTrade. In each labeling round, CoTrade carries out the label communication process in two steps. First, the confidence of each classifier's predictions on the unlabeled examples is explicitly estimated using specific data editing techniques. Second, a number of each classifier's most confidently predicted labels are passed to the other classifier, with certain constraints imposed to avoid introducing undesirable classification noise. Experiments on several real-world datasets across three domains show that CoTrade can effectively exploit unlabeled data to achieve better generalization performance.
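To make the iterative label-communication loop concrete, the sketch below implements a generic co-training skeleton in plain Python, not the full CoTrade algorithm: confidence here is a simple distance margin from a nearest-centroid classifier, standing in for CoTrade's data-editing-based estimate, and the two-view synthetic data, function names, and parameters are all illustrative assumptions.

```python
def centroid_classifier(examples):
    # Fit a 1-D nearest-centroid classifier on (feature, label) pairs,
    # labels in {0, 1}; returns predict(x) -> (label, confidence).
    sums = {0: [0.0, 0], 1: [0.0, 0]}
    for x, y in examples:
        sums[y][0] += x
        sums[y][1] += 1
    cent = {y: sums[y][0] / max(sums[y][1], 1) for y in (0, 1)}

    def predict(x):
        d0, d1 = abs(x - cent[0]), abs(x - cent[1])
        label = 0 if d0 <= d1 else 1
        conf = abs(d0 - d1)  # distance margin as a crude confidence proxy
        return label, conf

    return predict

def co_train(labeled, unlabeled, rounds=3, per_round=2):
    # labeled: list of ((view1, view2), y); unlabeled: list of (view1, view2).
    # Each round, each view's classifier labels the pool and passes its
    # most confident predictions to the OTHER view's training set.
    L1 = [(v1, y) for (v1, _), y in labeled]
    L2 = [(v2, y) for (_, v2), y in labeled]
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        h1, h2 = centroid_classifier(L1), centroid_classifier(L2)
        by1 = sorted(range(len(pool)), key=lambda i: -h1(pool[i][0])[1])[:per_round]
        by2 = sorted(range(len(pool)), key=lambda i: -h2(pool[i][1])[1])[:per_round]
        L2 += [(pool[i][1], h1(pool[i][0])[0]) for i in by1]  # view-1 labels feed view 2
        L1 += [(pool[i][0], h2(pool[i][1])[0]) for i in by2]  # view-2 labels feed view 1
        chosen = set(by1) | set(by2)
        pool = [p for i, p in enumerate(pool) if i not in chosen]
    h1, h2 = centroid_classifier(L1), centroid_classifier(L2)

    def predict(x):
        (l1, c1), (l2, c2) = h1(x[0]), h2(x[1])
        return l1 if c1 >= c2 else l2  # the more confident view decides

    return predict

def make_data():
    # Hypothetical, linearly separable two-view data: each view is one feature.
    pos = [((5.0 + 0.1 * i, 4.8 + 0.1 * i), 1) for i in range(10)]
    neg = [((0.0 + 0.1 * i, 0.2 + 0.1 * i), 0) for i in range(10)]
    labeled = [pos[0], neg[0]]           # one labeled seed per class
    unlabeled = [x for x, _ in pos[1:] + neg[1:]]
    return labeled, unlabeled
```

A distinguishing feature of CoTrade relative to this skeleton is the first step of each round: instead of trusting the raw margin, the confidence of each prediction is estimated explicitly via data editing before any label is allowed to cross views, which is what guards against the noise accumulation described above.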