Statistical methods for speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Supervised and unsupervised PCFG adaptation to novel domains. NAACL '03: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Volume 1.
Learning from Multiple Sources. Journal of Machine Learning Research.
Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research.
Learning and Domain Adaptation. DS '09: Proceedings of the 12th International Conference on Discovery Science.
Learning and domain adaptation. ALT '09: Proceedings of the 20th International Conference on Algorithmic Learning Theory.
Lexicon-based sentiment analysis of Urdu text using SentiUnits. MICAI '10: Proceedings of the 9th Mexican International Conference on Advances in Artificial Intelligence, Part I.
Multi-view transfer learning with a large margin approach. Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Selective sampling and active learning from single and multiple teachers. Journal of Machine Learning Research.
This paper presents a novel theoretical study of the general problem of multiple source adaptation using the notion of Rényi divergence. Our results build on our previous work [12], but significantly broaden its scope in several directions. We extend previous multiple source loss guarantees based on distribution-weighted combinations to arbitrary target distributions P, not necessarily mixtures of the source distributions; we analyze both the known and unknown target distribution cases, and prove a lower bound. We further extend our bounds to the case where the learner receives an approximate distribution for each source instead of the exact one, and show that similar loss guarantees can be achieved depending on the divergence between the approximate and true distributions. We also analyze the case where the labeling functions of the source domains differ somewhat. Finally, we report the results of experiments with both an artificial data set and a sentiment analysis task, showing the performance benefits of the distribution-weighted combinations and the quality of our bounds based on the Rényi divergence.
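As a concrete illustration of the two central notions in the abstract, the sketch below computes the Rényi divergence of order α between two discrete distributions and forms a distribution-weighted combination of per-source hypotheses. This is a minimal illustrative sketch, not the paper's implementation: the function names, the choice of base-2 logarithm, and the callable representation of sources `Q_k` and hypotheses `h_k` are all assumptions made here for clarity.

```python
import numpy as np

def renyi_divergence(p, q, alpha=2.0):
    """Rényi divergence D_alpha(P || Q) of order alpha > 1 between two
    discrete distributions given as probability vectors (base-2 log).

    D_alpha(P || Q) = (1 / (alpha - 1)) * log2( sum_x p(x)^alpha * q(x)^(1 - alpha) )
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # outcomes with p(x) = 0 contribute nothing for alpha > 1
    s = np.sum(p[mask] ** alpha * q[mask] ** (1.0 - alpha))
    return float(np.log2(s) / (alpha - 1.0))

def distribution_weighted_combination(hypotheses, sources, z, x):
    """Distribution-weighted combination of k source hypotheses at a point x:

        h_z(x) = sum_k z_k * Q_k(x) * h_k(x) / sum_k z_k * Q_k(x)

    `hypotheses` and `sources` are lists of callables (illustrative
    representation); `z` is a mixture weight vector summing to one.
    """
    weights = np.array([z_k * Q_k(x) for z_k, Q_k in zip(z, sources)])
    preds = np.array([h(x) for h in hypotheses])
    return float(np.dot(weights, preds) / weights.sum())
```

For example, with two sources whose densities at a point are 0.8 and 0.2, equal mixture weights, and hypotheses predicting 1.0 and 0.0, the combination weights the first hypothesis by 0.8: each prediction counts in proportion to how likely the point is under its source, which is what drives the loss guarantees in the paper.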