Leveraging the labeled data of related source tasks can improve classification performance on a target task. This paper focuses on this problem and proposes an algorithm named SSDT (Synthetic Source Data Transfer). Because the amount of training data strongly influences classification performance, SSDT creates synthetic training examples from the source data and combines them with the target data to train a classifier. The classifier is then applied to the target task, and experimental results show that SSDT improves performance noticeably.
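The abstract does not give implementation details, but the idea it describes can be sketched as follows. This is a minimal illustration, not the authors' method: the function and variable names, the SMOTE-style interpolation used to synthesize source examples, and the toy nearest-centroid classifier are all assumptions made for the example.

```python
# Hypothetical sketch of the SSDT idea: synthesize extra training examples
# from source-domain data (here by SMOTE-style interpolation between
# same-class pairs), then pool them with the scarce target-domain data.
# All names and details below are illustrative assumptions.
import random

def synthesize(source, n_new):
    """Create up to n_new synthetic points by interpolating random pairs
    of same-class source examples (a SMOTE-like heuristic)."""
    synthetic = []
    for _ in range(n_new):
        (x1, y1), (x2, y2) = random.sample(source, 2)
        if y1 != y2:            # only mix points that share a label
            continue
        a = random.uniform(0.0, 1.0)
        x_new = tuple(v1 + a * (v2 - v1) for v1, v2 in zip(x1, x2))
        synthetic.append((x_new, y1))
    return synthetic

def nearest_centroid_fit(train):
    """Toy stand-in classifier: per-class mean vectors."""
    sums, counts = {}, {}
    for x, y in train:
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: tuple(s / counts[y] for s in acc) for y, acc in sums.items()}

def predict(centroids, x):
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(x, centroids[y])))

random.seed(0)
# Source domain: plentiful labeled data for two classes.
source = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(50)] + \
         [((random.gauss(3, 1), random.gauss(3, 1)), 1) for _ in range(50)]
# Target domain: only a handful of labeled examples.
target = [((0.5, 0.5), 0), ((3.5, 3.5), 1)]

# Pool synthetic source-derived examples with the target data and train.
train = target + synthesize(source, n_new=100)
model = nearest_centroid_fit(train)
```

In a faithful reproduction of the paper, the toy classifier would be replaced by whatever classifier the authors evaluate, and the synthesis step by their actual generation procedure.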