Supervised learning is conventionally performed with paired input-output labeled data. After the training procedure, the adaptive system's weights are fixed while testing on unlabeled data is performed. Recently, the machine learning community has exploited unlabeled data in an attempt to improve classification performance. In this paper, we present an information theoretic learning (ITL) approach based on density divergence minimization that extends the training algorithm to use unlabeled data during testing. The method couples a boosting-like algorithm with an ITL-based cost function. Preliminary simulations suggest that the method can improve classifier performance in the application phase.
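As a rough illustration of the kind of density divergence an ITL cost function can minimize, the sketch below computes the Cauchy-Schwarz divergence between Parzen (Gaussian kernel) density estimates of two sample sets. This is a standard quantity in the ITL literature, not necessarily the exact divergence used in this paper; the kernel bandwidth `sigma` and the function names are illustrative assumptions.

```python
import numpy as np

def gauss_gram(x, y, sigma):
    # Gaussian kernel evaluated between all pairs of rows of x and y.
    # (Illustrative bandwidth handling; a strict Parzen derivation would
    # use sigma * sqrt(2) for the pairwise kernel size.)
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def cs_divergence(x, y, sigma=1.0):
    """Cauchy-Schwarz divergence between Parzen estimates of two sample sets.

    D_CS(p, q) = -log( V(p, q)^2 / (V(p, p) * V(q, q)) ),
    where V(., .) are (cross-)information potentials estimated from samples.
    """
    v_xx = gauss_gram(x, x, sigma).mean()  # information potential of p
    v_yy = gauss_gram(y, y, sigma).mean()  # information potential of q
    v_xy = gauss_gram(x, y, sigma).mean()  # cross-information potential
    return -np.log(v_xy ** 2 / (v_xx * v_yy))
```

The divergence is zero when the two sample sets coincide and grows as their estimated densities separate, which is the property a divergence-minimizing update over unlabeled data would exploit.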