In many domains, data items are represented by vectors of counts; count data arise, for example, in bioinformatics and in text documents represented as word-count vectors. Often, however, the amount of data available from an interesting source is too small to model that source well. When several data sets are available from related sources, exploiting their similarities through transfer learning can improve the resulting models compared to modeling each source independently. We introduce a Bayesian generative transfer learning model that represents similarity across document collections by sparse sharing of latent topics, controlled by an Indian buffet process. Unlike a prominent earlier approach, multi-task learning based on the hierarchical Dirichlet process (HDP), our model decouples the probability that a topic is shared from the topic's strength, making it easier to share low-strength topics. In experiments, our model outperforms the HDP approach both on synthetic data and in the first of two case studies on text collections, and performs comparably to the HDP approach in the second case study.
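The sparse sharing structure described above can be illustrated with a standard Indian buffet process draw: a binary matrix whose rows are document collections and whose columns are latent topics, where an entry of 1 means the collection uses that topic. This is a minimal sketch of the generic IBP generative procedure, not the paper's actual model or inference code; all function and variable names are illustrative.

```python
import numpy as np

def sample_ibp(num_collections, alpha, seed=None):
    """Draw a binary topic-sharing matrix Z from an Indian buffet process.

    Z[c, k] = 1 means collection c uses latent topic k. The IBP yields a
    sparse matrix with an unbounded number of columns (topics): popular
    topics tend to be reused, while new topics can always be introduced.
    Note that, as in the model sketched above, sharing is governed only
    by this binary matrix, separately from any per-topic strength.
    """
    rng = np.random.default_rng(seed)
    dish_counts = []  # how many collections use each existing topic
    rows = []
    for c in range(num_collections):
        row = []
        # Existing topics: collection c reuses topic k w.p. m_k / (c + 1).
        for k, m_k in enumerate(dish_counts):
            use = rng.random() < m_k / (c + 1)
            row.append(int(use))
            if use:
                dish_counts[k] += 1
        # New topics: a Poisson(alpha / (c + 1)) number of fresh columns.
        n_new = rng.poisson(alpha / (c + 1))
        row.extend([1] * n_new)
        dish_counts.extend([1] * n_new)
        rows.append(row)
    # Pad earlier rows with zeros for topics introduced later.
    num_topics = len(dish_counts)
    Z = np.zeros((num_collections, num_topics), dtype=int)
    for c, row in enumerate(rows):
        Z[c, :len(row)] = row
    return Z

Z = sample_ibp(num_collections=5, alpha=2.0, seed=0)
```

In a full generative model, each collection would then mix only the topics with `Z[c, k] = 1`, with topic strengths drawn separately; that separation is what allows a rarely used, low-strength topic to still be shared across collections.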