In many real-world applications, it is becoming common to have data extracted from multiple diverse sources, known as "multi-view" data. Multi-view learning (MVL) has been widely studied in many applications, but existing MVL methods learn each task individually. In this paper, we study a new direction of multi-view learning in which there are multiple related tasks, each with multi-view data (i.e., multi-view multi-task learning, or MVMT learning). In our MVMT learning methods, we learn a linear mapping for each view in each task. Within a single task, we use co-regularization to obtain functions that agree with each other on the unlabeled samples while simultaneously achieving low classification error on the labeled samples. Across different tasks, additional regularization terms are used to ensure that the functions learned for each view are similar. We also develop two extensions of the MVMT learning algorithm: one handles missing views, and the other handles non-uniformly related tasks. Experimental studies on three real-world data sets demonstrate that our MVMT methods significantly outperform existing state-of-the-art methods.
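The objective structure described above — a per-(task, view) linear map, a labeled-data loss, a co-regularization term encouraging view agreement on unlabeled samples, and a cross-task term pulling corresponding views together — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the data, the squared-loss choices, the regularization weights `lam_co` and `lam_task`, and all variable names are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)
T, V, d = 2, 2, 5  # number of tasks, views per task, features per view

# Toy data (illustrative only): labeled and unlabeled samples per (task, view),
# plus one linear weight vector per (task, view).
Xl = [[rng.standard_normal((8, d)) for _ in range(V)] for _ in range(T)]
y  = [rng.choice([-1.0, 1.0], size=8) for _ in range(T)]
Xu = [[rng.standard_normal((20, d)) for _ in range(V)] for _ in range(T)]
W  = [[rng.standard_normal(d) * 0.1 for _ in range(V)] for _ in range(T)]

def mvmt_objective(W, lam_co=1.0, lam_task=1.0):
    """Hypothetical MVMT-style objective combining three ingredients
    named in the abstract: labeled loss, within-task co-regularization,
    and cross-task similarity regularization."""
    loss = 0.0
    for t in range(T):
        # (1) labeled loss: each view's predictor should fit the labels
        for v in range(V):
            loss += np.mean((Xl[t][v] @ W[t][v] - y[t]) ** 2)
        # (2) co-regularization: views should agree on unlabeled samples
        for v in range(V):
            for u in range(v + 1, V):
                loss += lam_co * np.mean(
                    (Xu[t][v] @ W[t][v] - Xu[t][u] @ W[t][u]) ** 2)
    # (3) cross-task regularization: the same view's weights
    # should stay close across related tasks
    for v in range(V):
        for t in range(T):
            for s in range(t + 1, T):
                loss += lam_task * np.sum((W[t][v] - W[s][v]) ** 2)
    return loss

print(mvmt_objective(W))
```

Minimizing such an objective (e.g., by gradient descent over all `W[t][v]`) trades off fitting the labeled data against view agreement and task similarity; setting `lam_task=0` recovers independent per-task co-regularized learning.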