Extending Semi-supervised Learning Methods for Inductive Transfer Learning
ICDM '09 Proceedings of the 2009 Ninth IEEE International Conference on Data Mining
The main goal of transfer learning is to reuse data from related domains to learn models for the target domain. Existing instance-transfer learning algorithms estimate the relevance of related-domain instances mainly from a small number of labeled instances, and the generalization ability of these algorithms needs to be improved. To make the relevance estimation more reliable, we propose to use unlabeled target-domain instances as additional training data. These instances serve as a new source of domain knowledge that helps determine the relevance of related-domain instances. Under the general framework of boosting, we introduce a local smoothness regularizer and obtain a new empirical loss function that incorporates the unlabeled instances. Gradient descent is used to iteratively optimize the loss function, yielding a new instance-transfer learning algorithm. Experimental results on text datasets show that the new algorithm outperforms competitive algorithms.
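The abstract describes the method only at a high level: a boosting loss on labeled instances, augmented with a local smoothness penalty over unlabeled target-domain instances, minimized by functional gradient descent. The sketch below is a minimal, hypothetical illustration of that kind of regularized boosting objective — the exponential loss, decision-stump weak learner, k-NN smoothness pairs, and all parameter names are assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

def knn_pairs(X, k=3):
    # Directed nearest-neighbor pairs among unlabeled points (for the smoothness term).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :min(k, len(X) - 1)]
    return [(i, j) for i in range(len(X)) for j in nbrs[i]]

def stump_predict(X, feat, thr, sign):
    return sign * np.where(X[:, feat] > thr, 1.0, -1.0)

def fit_stump(X, target):
    # Least-squares fit of a decision stump to the negative-gradient targets.
    best, best_err = None, np.inf
    for feat in range(X.shape[1]):
        for thr in np.unique(X[:, feat]):
            for sign in (1.0, -1.0):
                err = np.sum((target - stump_predict(X, feat, thr, sign)) ** 2)
                if err < best_err:
                    best_err, best = err, (feat, thr, sign)
    return best

def regularized_boost(Xl, yl, Xu, rounds=20, lam=0.1, lr=0.3, k=3):
    # Minimize  sum_i exp(-y_i F(x_i))  +  lam * sum_{(i,j)} (F(x_i) - F(x_j))^2
    # over labeled (Xl, yl) and unlabeled Xu, by functional gradient descent:
    # each round fits a stump to the negative gradient of this loss.
    pairs = knn_pairs(Xu, k)
    Fl, Fu = np.zeros(len(Xl)), np.zeros(len(Xu))
    stumps = []
    for _ in range(rounds):
        # Negative gradient of the exponential loss on labeled points.
        gl = yl * np.exp(-yl * Fl)
        # Negative gradient of the local smoothness penalty on unlabeled points.
        gu = np.zeros(len(Xu))
        for i, j in pairs:
            diff = Fu[i] - Fu[j]
            gu[i] -= 2.0 * lam * diff
            gu[j] += 2.0 * lam * diff
        stump = fit_stump(np.vstack([Xl, Xu]), np.concatenate([gl, gu]))
        stumps.append(stump)
        Fl += lr * stump_predict(Xl, *stump)
        Fu += lr * stump_predict(Xu, *stump)
    return stumps

def predict(stumps, X, lr=0.3):
    F = np.zeros(len(X))
    for s in stumps:
        F += lr * stump_predict(X, *s)
    return np.sign(F)
```

The key point the sketch mirrors is that unlabeled instances enter the loss only through the smoothness regularizer, so they shape the learned ensemble without requiring labels.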