This paper studies a new machine learning strategy called co-transfer learning. Unlike many previous learning problems, we focus on using labeled data from different feature spaces to enhance classification in all of those spaces simultaneously. For instance, we use both labeled images and labeled text to help learn models that classify image data and text data together. A key component of co-transfer learning is building relations that link the different feature spaces, so that knowledge can be co-transferred across them. Our idea is to model the problem as a joint transition probability graph. The transition probabilities are constructed from intra-relationships, based on an affinity metric among instances within a space, and inter-relationships, based on co-occurrence information among instances from different spaces. The proposed algorithm computes a ranking of labels, indicating the importance of each label to an instance, by propagating the ranking scores of labeled instances via a random walk with restart. The main contributions of this paper are to (i) propose a co-transfer learning (CT-Learn) framework that learns simultaneously by co-transferring knowledge across different spaces; (ii) establish the theoretical properties of the random walk on the joint transition probability graph, so that the proposed learning model can be applied effectively; and (iii) develop an efficient algorithm to compute ranking scores and generate possible labels for a given instance. Experimental results on benchmark data sets (image-text and English-Chinese-French classification data) show that the proposed algorithm is computationally efficient and effective in learning across different spaces. In our comparison, the classification performance of the CT-Learn algorithm is better than that of the other transfer learning algorithms tested.
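To illustrate the propagation step, the following is a minimal sketch of a random walk with restart on a joint transition graph. The matrix `W` and the restart vector `e` below are hypothetical toy values, not the paper's actual construction; in CT-Learn, `W` would combine intra-space affinities and inter-space co-occurrence relations, and the restart distribution would be seeded by the labeled instances of a given label.

```python
import numpy as np

def random_walk_with_restart(W, restart, c=0.85, tol=1e-8, max_iter=1000):
    """Iterate r = c * W @ r + (1 - c) * restart until convergence.

    W       -- column-stochastic joint transition matrix (toy example here)
    restart -- restart distribution seeded by labeled instances
    c       -- probability of following the graph rather than restarting
    """
    r = restart.copy()
    for _ in range(max_iter):
        r_next = c * W @ r + (1 - c) * restart
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

# Toy joint graph over 4 instances (e.g., 2 images and 2 texts):
# each column sums to 1, mixing intra- and inter-space transitions.
W = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.5, 0.0, 0.0, 0.5],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.5, 0.5, 0.0],
])
e = np.array([1.0, 0.0, 0.0, 0.0])  # restart mass on one labeled instance

scores = random_walk_with_restart(W, e)
```

The resulting `scores` vector ranks all instances, across both spaces, by their relevance to the labeled seed; repeating this per label and comparing the scores for each instance yields the label ranking described above.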