A large number of applications involving multiple views of data are coming into use, e.g., reporting news on the Internet with both text and video, or identifying a person by both fingerprints and face images. Meanwhile, labeling such data requires expensive effort, so in many applications most data are left unlabeled. Co-training can exploit the information in unlabeled data in multi-view scenarios. However, the assumptions of co-training, i.e., that each view is sufficient and the views are redundant, are too strong to hold in most situations. Notably, different views often differ in discrimination ability, and views with strong discrimination ability are usually hard to obtain. Consequently, a promising approach is to exploit unlabeled multi-view training data to integrate the information of the strong view into the weak view, so that the weak view's discrimination ability is improved; only classifiers trained on the weak view are then used for the subsequent classification tasks. In this paper, based on dependence maximization, we propose a framework to inject the information of strong views into weak ones. Experiments show that the framework outperforms co-training in improving the performance of classifiers trained on the weak view.
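The dependence-maximization idea can be illustrated with the empirical Hilbert-Schmidt Independence Criterion (HSIC), a standard measure of statistical dependence between two views. The sketch below is not the paper's framework, only a minimal, hedged illustration of how cross-view dependence can be quantified; the kernel choice (Gaussian RBF), bandwidth `sigma`, and sample sizes are assumptions for the demo.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC estimate: trace(K H L H) / (n - 1)^2,
    where H centers the kernel matrices. Larger values indicate
    stronger statistical dependence between the two views."""
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy two-view data: one pair of views is functionally related,
# the other pair is sampled independently.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
dependent = hsic(x, x ** 2)                       # related views
independent = hsic(x, rng.normal(size=(100, 1)))  # unrelated views
# dependent is typically much larger than independent
```

In a strong-view/weak-view setting, a representation of the weak view could be trained to maximize such a dependence score against the strong view on unlabeled data; the specific objective used in the paper is not reproduced here.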