The transfer learning problem, in which one must design classifiers with high generalization ability from labeled samples whose distribution differs from that of the test samples, is an important and challenging research issue in machine learning and data mining. This paper focuses on designing a semi-supervised classifier trained on unlabeled samples drawn from the same distribution as the test samples, and presents a semi-supervised classification method for the transfer learning problem based on a hybrid discriminative and generative model. Although JESS-CM is one of the most successful semi-supervised classifier design frameworks and has achieved the best published results on several NLP tasks, it suffers from overfitting in the transfer learning settings considered in this paper. The proposed method is expected to mitigate this overfitting problem by utilizing both labeled and unlabeled samples in the discriminative training of the classifier. We also present a refined objective function that formalizes the training algorithm and the form of the resulting classifier. Experimental results on text classification with three typical benchmark collections confirm that the proposed method outperforms the JESS-CM framework in most transfer learning settings.
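To make the hybrid idea concrete, the following is a minimal sketch of one common instantiation, not the paper's exact method or objective: a generative multinomial naive Bayes component trained with EM over labeled plus unlabeled documents (soft class assignments for the unlabeled set), combined with a discriminative linear score through a hypothetical interpolation weight `lam`. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def nb_em(X_lab, y_lab, X_unl, n_classes=2, n_iter=10, alpha=1.0):
    """Generative component: multinomial naive Bayes fit by EM.
    Unlabeled documents contribute through soft class responsibilities."""
    n_feat = X_lab.shape[1]
    # Start with uniform responsibilities for the unlabeled documents.
    resp_unl = np.full((X_unl.shape[0], n_classes), 1.0 / n_classes)
    for _ in range(n_iter):
        # M-step: re-estimate class priors and word distributions from
        # labeled counts plus responsibility-weighted unlabeled counts.
        prior = np.zeros(n_classes)
        word = np.full((n_classes, n_feat), alpha)  # Laplace smoothing
        for c in range(n_classes):
            mask = (y_lab == c)
            prior[c] = mask.sum() + resp_unl[:, c].sum()
            word[c] += X_lab[mask].sum(axis=0) + resp_unl[:, c] @ X_unl
        prior /= prior.sum()
        word /= word.sum(axis=1, keepdims=True)
        # E-step: recompute soft labels for the unlabeled documents.
        log_post = np.log(prior) + X_unl @ np.log(word).T
        log_post -= log_post.max(axis=1, keepdims=True)
        resp_unl = np.exp(log_post)
        resp_unl /= resp_unl.sum(axis=1, keepdims=True)
    return prior, word

def hybrid_log_scores(X, prior, word, w, b, lam=0.5):
    """Combine generative log p(x, y) with a discriminative linear score.
    `lam` is an assumed interpolation knob; (w, b) would come from
    discriminative training (e.g. logistic regression) on labeled data."""
    gen = np.log(prior) + X @ np.log(word).T   # generative log-joint
    disc = X @ w.T + b                          # discriminative score
    return lam * gen + (1.0 - lam) * disc
```

Classification then takes the argmax of `hybrid_log_scores` over classes; in a fuller treatment, `lam` (and the discriminative parameters) would themselves be tuned so that the unlabeled data regularizes the discriminative part, which is the role the abstract attributes to the proposed training scheme.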