Combining labeled and unlabeled data with co-training. In COLT '98: Proceedings of the Eleventh Annual Conference on Computational Learning Theory.
Text classification from labeled and unlabeled documents using EM. Machine Learning, special issue on information retrieval.
Analyzing the effectiveness and applicability of co-training. In Proceedings of the Ninth International Conference on Information and Knowledge Management.
High-performing feature selection for text classification. In Proceedings of the Eleventh International Conference on Information and Knowledge Management.
Active + Semi-supervised Learning = Robust Multi-View Learning. In ICML '02: Proceedings of the Nineteenth International Conference on Machine Learning.
In this paper, we propose a Multi-View Expectation Maximization (EM) algorithm for finite mixture models to handle real-world learning problems that have natural feature splits. Like Co-training and Co-EM, Multi-View EM splits the features into views, but it treats multi-view learning within the EM framework. Compared with other algorithms in the Co-training setting, the proposed algorithm has two appealing advantages: its convergence is theoretically guaranteed, and it easily handles problems with more than two views. Experiments on the WebKB data set demonstrate that Multi-View EM performs well compared with Co-EM, Co-training, and standard EM.
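The idea of running EM over a natural feature split can be illustrated with a minimal sketch. The code below is a hypothetical reading, not the paper's exact algorithm: it fits a diagonal-Gaussian mixture where each view keeps its own component parameters, and in each round the responsibilities computed in one view drive the M-step of the next view, in the Co-EM-style alternation the abstract alludes to. The function names (`multi_view_em`, `gaussian_loglik`) and all modeling choices (Gaussian components, diagonal covariance, round-robin view schedule) are assumptions for illustration.

```python
import numpy as np

def gaussian_loglik(X, means, variances):
    """Per-component diagonal-Gaussian log-likelihoods, shape (n, K)."""
    n, _ = X.shape
    K = means.shape[0]
    ll = np.zeros((n, K))
    for k in range(K):
        ll[:, k] = -0.5 * np.sum(
            np.log(2 * np.pi * variances[k]) + (X - means[k]) ** 2 / variances[k],
            axis=1,
        )
    return ll

def multi_view_em(views, K, n_iter=50, seed=0):
    """Hypothetical multi-view EM sketch for a Gaussian mixture.

    views : list of (n, d_v) arrays over the same n examples (one array per
    feature split). Each view holds its own means/variances; the E-step of
    one view supplies the responsibilities for the next view's M-step, so
    the views inform each other inside a single EM loop. Any number of
    views is handled by the round-robin schedule.
    """
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    pi = np.full(K, 1.0 / K)
    params = []
    for X in views:
        idx = rng.choice(n, K, replace=False)       # init means from data points
        params.append([X[idx].copy(), np.ones((K, X.shape[1]))])
    resp = rng.dirichlet(np.ones(K), size=n)        # random soft labels to start
    for it in range(n_iter):
        v = it % len(views)                         # view updated this round
        X = views[v]
        # M-step in view v using the current responsibilities.
        Nk = resp.sum(axis=0) + 1e-12
        means = (resp.T @ X) / Nk[:, None]
        variances = (resp.T @ (X ** 2)) / Nk[:, None] - means ** 2 + 1e-6
        params[v] = [means, variances]
        pi = Nk / n
        # E-step in view v: responsibilities passed to the next view's M-step.
        log_r = np.log(pi) + gaussian_loglik(X, means, variances)
        log_r -= log_r.max(axis=1, keepdims=True)   # for numerical stability
        resp = np.exp(log_r)
        resp /= resp.sum(axis=1, keepdims=True)
    return pi, params, resp
```

With standard EM, the concatenated features share one model; here each view's parameters are refit separately, which is one way to read "feature split as Co-training and Co-EM" inside the EM framework.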