In sentiment classification, unlabeled user reviews are often free to collect for new products, while sentiment labels are scarce. In this setting, active learning is often applied to build a high-quality classifier with as few labeled instances as possible. However, when the labeled instances are insufficient, the performance of active learning is limited. In this paper, we aim to enhance active learning by employing labeled reviews from a different but related (source) domain. We propose a framework, Active Vector Rotation (AVR), which adaptively utilizes the source-domain data during the active learning procedure. Thus, AVR benefits from the source domain when it is helpful and avoids its negative effects when it is harmful. Extensive experiments on toy data and review texts demonstrate the effectiveness of our approach, compared with other state-of-the-art active learning approaches as well as approaches with domain adaptation.
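To make the setting concrete, the sketch below illustrates the general idea of combining pool-based active learning with down-weighted source-domain data; it is not AVR itself (the paper's vector-rotation mechanism is not reproduced here). The data, the uncertainty-sampling query rule, and the geometric weight decay on the source instances are all illustrative assumptions; a weighted logistic regression is fit by gradient descent with NumPy only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the labeled source domain is a shifted version of the
# unlabeled target pool (same boundary direction, different offset).
n_src, n_pool = 100, 200
X_src = rng.normal(0.5, 1.0, (n_src, 2))
y_src = (X_src[:, 0] + X_src[:, 1] > 1.0).astype(float)
X_pool = rng.normal(0.0, 1.0, (n_pool, 2))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0.0).astype(float)  # oracle labels


def fit_logreg(X, y, w, lr=0.1, iters=500):
    """Instance-weighted logistic regression via gradient descent (bias folded in)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    theta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ theta))
        theta -= lr * (Xb.T @ (w * (p - y))) / w.sum()
    return theta


def predict_proba(theta, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ theta))


labeled = []       # indices of target-pool instances queried so far
src_weight = 1.0   # influence of source data; decayed as target labels arrive
for _ in range(20):
    idx = np.array(labeled, dtype=int)
    X_lab = np.vstack([X_src, X_pool[idx]])
    y_lab = np.concatenate([y_src, y_pool[idx]])
    w = np.concatenate([np.full(n_src, src_weight), np.ones(len(idx))])
    theta = fit_logreg(X_lab, y_lab, w)
    # Query the pool instance the current model is least certain about.
    uncertainty = np.abs(predict_proba(theta, X_pool) - 0.5)
    uncertainty[idx] = np.inf  # never re-query an already-labeled instance
    labeled.append(int(np.argmin(uncertainty)))
    src_weight *= 0.9  # crude stand-in for AVR's adaptive use of the source

acc = np.mean((predict_proba(theta, X_pool) > 0.5) == y_pool)
```

A fixed decay schedule is the simplest possible stand-in for "adaptively utilizing" the source domain; the point of the toy is only that, with few target labels, the down-weighted source data keeps the classifier usable while active queries correct the domain shift.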