A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing & STOC'94, May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT'95), March 13–15, 1995
Machine Learning - Special issue on inductive transfer
Regularized multi-task learning
Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining
Learning to learn with the informative vector machine
ICML '04 Proceedings of the twenty-first international conference on Machine learning
A high-performance semi-supervised learning method for text chunking
ACL '05 Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics
Boosting for transfer learning
Proceedings of the 24th international conference on Machine learning
Self-taught learning: transfer learning from unlabeled data
Proceedings of the 24th international conference on Machine learning
Domain adaptation with structural correspondence learning
EMNLP '06 Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing
Mapping and revising Markov logic networks for transfer learning
AAAI'07 Proceedings of the 22nd national conference on Artificial intelligence - Volume 1
Transfer learning from minimal target data by mapping across relational domains
IJCAI'09 Proceedings of the 21st international joint conference on Artificial intelligence
ICDM '09 Proceedings of the 2009 Ninth IEEE International Conference on Data Mining
IEEE Transactions on Knowledge and Data Engineering
Instance-based transfer is an important paradigm for transfer learning, in which data from related tasks (source data) are combined with the data for the current learning task (target data) to train a learner for the current (target) task. In most application scenarios, however, the benefit of the source data is unclear: the source may contain instances that help the target task as well as instances that harm it. Simply combining the source with the target data may therefore degrade performance (negative transfer). Selecting the source instances that will benefit the target task is thus a key step in instance-based transfer learning. Most existing instance-based transfer methods either lack such a selection step or entangle source selection with training for the target task, so training may use source data that is harmful to the target. We propose a simple yet effective method for instance-based transfer learning in settings where the usefulness of the source data is unclear. The method employs a double-selection process, based on bootstrapping, to reduce the impact of irrelevant or harmful source instances. Experimental results show that in most cases our method yields larger improvements through transfer than TrBagg (Kamishima et al., 2009) and TrAdaBoost (Dai et al., 2009), and it handles a wider range of transfer learning scenarios.
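To make the bootstrap-selection idea concrete, here is a minimal sketch of filtering source instances by bootstrap voting: models trained on bootstrap resamples of the target data vote on each source instance, and only instances classified consistently with their labels are kept for combined training. This is an illustrative assumption, not the paper's exact double-selection algorithm; the k-NN base learner, the vote threshold, and the number of rounds are all hypothetical choices.

```python
import random

def knn_predict(train, x, k=3):
    """Predict the label of feature tuple x by majority vote among the
    k nearest training instances (squared Euclidean distance)."""
    nearest = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))
    labels = [lab for _, lab in nearest[:k]]
    return max(set(labels), key=labels.count)

def select_source_instances(target, source, rounds=20, threshold=0.6, seed=0):
    """Keep only source instances whose labels agree with models trained
    on bootstrap resamples of the target data in at least `threshold`
    of the rounds (a single selection pass, for illustration)."""
    rng = random.Random(seed)
    votes = [0] * len(source)
    for _ in range(rounds):
        # Bootstrap resample: draw |target| instances with replacement.
        boot = [target[rng.randrange(len(target))] for _ in range(len(target))]
        for i, (x, y) in enumerate(source):
            if knn_predict(boot, x) == y:
                votes[i] += 1
    return [s for v, s in zip(votes, source) if v / rounds >= threshold]

# Toy usage: target labels follow x > 0.5; one source instance is mislabeled.
target = [((0.0,), 0), ((0.1,), 0), ((0.2,), 0), ((0.3,), 0),
          ((0.7,), 1), ((0.8,), 1), ((0.9,), 1), ((1.0,), 1)]
source = [((0.05,), 0), ((0.95,), 1), ((0.15,), 1)]  # last one is harmful
kept = select_source_instances(target, source)
```

Under this sketch, the mislabeled source instance receives few votes and is filtered out before the combined target-plus-source training step, which is the behavior the abstract attributes to the selection stage.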