Variational methods for inference and estimation in graphical models. The Journal of Machine Learning Research.
A Variational Method for Learning Sparse and Overcomplete Representations. Neural Computation.
Modeling the evolution of associated data. Data & Knowledge Engineering.
Latent association analysis of document pairs. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Discovering different types of topics: factored topic models. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI '13).
Topic models such as Latent Dirichlet Allocation (LDA) and the Correlated Topic Model (CTM) have recently emerged as powerful statistical tools for text document modeling. In this paper, we improve upon CTM and propose Independent Factor Topic Models (IFTM), which use linear latent variable models to uncover the hidden sources of correlation between topics. This work makes two main contributions. First, by using a sparse source prior model, we can directly visualize sparse patterns of topic correlations. Second, the conditional independence assumption implied by the use of latent source variables allows the objective function to factorize, leading to a fast Newton-Raphson-based variational inference algorithm. Experimental results on synthetic and real data show that IFTM runs on average 3--5 times faster than CTM while giving competitive performance as measured by perplexity and log-likelihood of held-out data.
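To make the modeling idea concrete, here is a minimal generative sketch of how a linear latent variable model with a sparse source prior can induce topic correlations. All dimensions, variable names, and the choice of a Laplace prior for the sparse sources are illustrative assumptions, not the paper's exact specification: correlated topic proportions are produced by mixing a few sparse sources through a loading matrix and applying a softmax, and words are then drawn LDA-style.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not taken from the paper)
K = 5    # number of topics
F = 2    # number of latent sources
V = 20   # vocabulary size
N = 50   # words in the document

# Linear latent variable model: a few sparse sources s drive
# correlations among the K topic proportions.
A = rng.normal(size=(K, F))    # mixing matrix (topic loadings)
mu = rng.normal(size=K)        # mean offset
s = rng.laplace(size=F)        # sparse source prior (Laplace, as an example)

eta = A @ s + mu                            # correlated natural parameters
theta = np.exp(eta) / np.exp(eta).sum()     # softmax -> topic proportions

# LDA-style word generation given the correlated topic proportions
beta = rng.dirichlet(np.ones(V), size=K)    # topic-word distributions
z = rng.choice(K, size=N, p=theta)          # topic assignment per word
words = np.array([rng.choice(V, p=beta[k]) for k in z])
```

Because the topic proportions depend on only F sources, the induced correlation structure is low-rank, and a sparse prior on the sources makes the recovered correlation patterns directly interpretable, which is the visualization benefit the abstract describes.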