Frustratingly easy semi-supervised domain adaptation
DANLP 2010 Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing
CIKM '10 Proceedings of the 19th ACM international conference on Information and knowledge management
Logistic regression for transductive transfer learning from multiple sources
ADMA'10 Proceedings of the 6th international conference on Advanced data mining and applications - Volume Part II
Language models as representations for weakly-supervised NLP tasks
CoNLL '11 Proceedings of the Fifteenth Conference on Computational Natural Language Learning
Multi-domain active learning for text classification
Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining
Confidence-weighted linear classification for text categorization
The Journal of Machine Learning Research
Structural and topical dimensions in multi-task patent translation
EACL '12 Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics
Multi-domain learning: when do domains matter?
EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
Triplex transfer learning: exploiting both shared and distinct concepts for text classification
Proceedings of the sixth ACM international conference on Web search and data mining
Cross-media sentiment classification and application to box-office forecasting
Proceedings of the 10th Conference on Open Research Areas in Information Retrieval
Online portfolio selection: A survey
ACM Computing Surveys (CSUR)
Concept learning for cross-domain text classification: a general probabilistic framework
IJCAI'13 Proceedings of the Twenty-Third international joint conference on Artificial Intelligence
State-of-the-art statistical NLP systems for a variety of tasks learn from labeled training data that is often domain-specific. However, there may be multiple domains or sources of interest on which a system must perform. For example, a spam filtering system must give high-quality predictions for many users, each of whom receives email from different sources and may make slightly different decisions about what is or is not spam. Rather than learning a separate model for each domain, we explore systems that learn across multiple domains. We develop a new multi-domain online learning framework based on parameter combination from multiple classifiers. Our algorithms draw on multi-task learning and domain adaptation to adapt multiple source-domain classifiers to a new target domain, to learn across multiple similar domains, and to learn across a large number of disparate domains. We evaluate our algorithms on two popular NLP domain adaptation tasks: sentiment classification and spam filtering.
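The idea of combining a shared classifier with per-domain classifiers can be sketched with a minimal online learner. This is an illustrative perceptron-style example, not the paper's actual algorithm (which uses confidence-weighted parameter combination); the class name and update rule here are assumptions chosen for clarity.

```python
from collections import defaultdict

class MultiDomainPerceptron:
    """Hypothetical sketch of multi-domain online learning:
    each domain's effective weight vector is the sum of a shared
    vector (capturing behavior common to all domains) and a
    domain-specific vector (capturing that domain's quirks)."""

    def __init__(self, n_features):
        self.shared = [0.0] * n_features
        # Lazily created domain-specific weight vectors.
        self.domain = defaultdict(lambda: [0.0] * n_features)

    def predict(self, domain, x):
        # Score with the combined (shared + domain) parameters.
        w = self.domain[domain]
        score = sum((self.shared[i] + w[i]) * xi for i, xi in enumerate(x))
        return 1 if score >= 0 else -1

    def update(self, domain, x, y):
        # Perceptron-style mistake-driven update: move both the
        # shared and the domain-specific weights toward label y,
        # so common signal accumulates in `shared` while conflicting
        # domains diverge only in their own vectors.
        if self.predict(domain, x) != y:
            w = self.domain[domain]
            for i, xi in enumerate(x):
                self.shared[i] += y * xi
                w[i] += y * xi
```

With this decomposition, two domains that disagree on a feature (e.g. two users who label the same email differently) can each be classified correctly: the domain-specific vectors override the shared one where needed, while domains that agree reinforce the shared parameters.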