Transfer learning addresses the problem that labeled training data are too scarce to produce a high-performance model. Typically, given a target learning task, most transfer learning approaches require the designer to select one or more auxiliary tasks as sources. However, how to automatically select the right source data for effective knowledge transfer remains an unsolved problem, which limits the applicability of transfer learning. In this paper, we take a step forward and propose a novel transfer learning framework, known as source-selection-free transfer learning (SSFTL), which frees users from the need to select source domains. Instead of asking users for source and target data pairs, as traditional transfer learning does, SSFTL turns to large online information sources such as the World Wide Web or Wikipedia for help. The source data for transfer learning may be hidden somewhere within these large online information sources, but users do not know where. Based on these sources, we train a large number of classifiers offline. Then, given a target task, SSFTL builds a bridge between the labels of the potential source candidates and the target-domain data, using the tag clouds of large online social media as a label translator. An added advantage of SSFTL is that, unlike many previous transfer learning approaches, which are difficult to scale up to the Web, SSFTL is highly scalable and can offload much of the training work to an offline stage. We demonstrate the effectiveness and efficiency of SSFTL through extensive experiments on several real-world text classification datasets.
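The label-translation idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name, the dictionary-based label-similarity matrix (assumed to be derived offline from social-media tag co-occurrence), and the toy labels are all hypothetical assumptions.

```python
# Hedged sketch of source-selection-free label transfer:
# many source classifiers are trained offline; at prediction time their
# confidence scores are mapped onto target labels via a label-similarity
# matrix (here a dict, imagined as built from tag-cloud co-occurrence).
# All names below are illustrative, not SSFTL's real API.

def translate_labels(source_scores, similarity, target_labels):
    """Combine source-classifier scores into a target-label prediction.

    source_scores: dict source_label -> confidence for one document
    similarity:    dict (source_label, target_label) -> similarity in [0, 1]
    target_labels: list of candidate target labels
    """
    scores = {}
    for t in target_labels:
        # Each source label votes for target labels it is similar to,
        # weighted by the source classifier's confidence.
        scores[t] = sum(
            conf * similarity.get((s, t), 0.0)
            for s, conf in source_scores.items()
        )
    return max(scores, key=scores.get)

# Toy example: two offline source classifiers fire on a document.
source_scores = {"basketball": 0.9, "finance": 0.1}
similarity = {("basketball", "sports"): 0.8, ("finance", "business"): 0.7}
print(translate_labels(source_scores, similarity, ["sports", "business"]))
# -> sports  (0.9 * 0.8 = 0.72 outweighs 0.1 * 0.7 = 0.07)
```

The expensive part, training the source classifiers and building the similarity matrix, happens entirely offline, which is what makes the online prediction step cheap and scalable.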