IEEE Transactions on Knowledge and Data Engineering
In multi-view learning, one typically seeks a classifier for each partition (view) of the feature vector. We consider the special case of surrogate supervision multi-view learning, in which a classifier is sought for one view although no labeled examples are available for that view. Instead, the training set consists of labeled examples for the other view together with unlabeled two-view data. While training and testing a classifier in the labeled view is straightforward, performing the same task in the view without labels is challenging. To solve this problem, we introduce an upper bound on the classical hinge loss (commonly used in support vector machines) that is well suited to the surrogate supervision multi-view setting. The bound requires only labeled examples from the other view and unlabeled examples of the two views. Using this bound, we derive the surrogate supervision multi-class support vector machine (SSM-SVM). We evaluate the algorithm against alternative methods on a collection of datasets and present an application to lip reading on an audiovisual dataset.
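The surrogate supervision setup described above can be illustrated with a deliberately simplified sketch. Note that this is *not* the paper's hinge-loss bound or the SSM-SVM algorithm: instead of the bound, the sketch transfers labels through the unlabeled pairs by using view-1 predictions as surrogate labels for view 2, a cruder heuristic chosen only to make the data flow concrete. All names, the synthetic data, and the label-transfer step are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-view data: both views are noisy linear functions of a
# shared latent binary label (an assumption for this toy example).
n_lab, n_unlab, d = 200, 400, 5
y_lat = rng.choice([-1.0, 1.0], size=n_lab + n_unlab)
view1 = y_lat[:, None] + 0.5 * rng.standard_normal((n_lab + n_unlab, d))
view2 = y_lat[:, None] + 0.5 * rng.standard_normal((n_lab + n_unlab, d))

X1_lab, y_lab = view1[:n_lab], y_lat[:n_lab]   # labeled examples, view 1 only
X1_un, X2_un = view1[n_lab:], view2[n_lab:]    # unlabeled paired two-view data

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.01):
    """Minimize the regularized hinge loss by subgradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1.0  # examples violating the margin
        grad = lam * w - (X[active] * y[active, None]).sum(axis=0) / len(y)
        w -= lr * grad
    return w

# Step 1: train a standard hinge-loss classifier in the labeled view.
w1 = train_linear_svm(X1_lab, y_lab)

# Step 2 (simplified surrogate supervision): view-1 predictions on the
# unlabeled pairs act as surrogate labels for training the view-2 classifier.
y_surr = np.sign(X1_un @ w1)
w2 = train_linear_svm(X2_un, y_surr)

# The view-2 classifier was trained without any view-2 labels.
acc = np.mean(np.sign(X2_un @ w2) == y_lat[n_lab:])
print(f"view-2 accuracy: {acc:.2f}")
```

The point of the sketch is the data flow: labels exist only for view 1, yet a view-2 classifier is obtained by routing supervision through the unlabeled paired examples, which is the problem the paper's hinge-loss bound addresses in a principled way.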