Shortcomings of latent models in supervised settings

  • Authors: Vijay Krishnan
  • Affiliations: IIT Bombay, Mumbai, India
  • Venue: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
  • Year: 2005

Abstract

The Aspect Model [1, 2] and the Latent Dirichlet Allocation (LDA) Model [3, 4] are latent generative models proposed for modeling discrete data such as text. Though it has not been explicitly published (to the best of our knowledge), it is reasonably well known in the research community that the Aspect Model does not perform well in supervised settings, and that latent models are frequently not identifiable, i.e., their optimal parameters are not unique. In this paper, we make a much stronger claim about the pitfalls of commonly used latent models. By constructing a small, synthetic, but by no means unrealistic corpus, we show that latent models have inherent limitations that prevent them from recovering semantically meaningful parameters from data generated by a reasonable generative distribution. In fact, our experiments with supervised classification using the Aspect Model showed that its performance was rather poor, worse even than Naive Bayes, which led us to the synthetic study. We also analyze the use of tempered EM and show that it does not remedy these shortcomings. Our analysis suggests that there is also scope for improvement in LDA. We then use our insight into the shortcomings of these models to propose a promising variant of LDA that does not suffer from these drawbacks, which could lead to much better performance and model fit in the supervised setting.
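To make the concepts in the abstract concrete, below is a minimal sketch (not the paper's experimental code) of the Aspect Model fit by tempered EM on a toy corpus. The Aspect Model factors the joint distribution as P(d, w) = sum_z P(z) P(d|z) P(w|z), and tempered EM raises the E-step posterior to an exponent beta <= 1. All names, the toy corpus, and the tempering value beta = 0.8 are illustrative assumptions; the final lines demonstrate the non-identifiability noted above, since permuting the latent states leaves the likelihood unchanged.

    import numpy as np

    def fit_plsa(counts, k, beta=1.0, iters=200, seed=0):
        # Fit P(z), P(d|z), P(w|z) of the Aspect Model by (tempered) EM
        # on a document-word count matrix. beta < 1 is the tempering
        # exponent applied in the E-step; beta = 1.0 recovers standard EM.
        rng = np.random.default_rng(seed)
        n_docs, n_words = counts.shape
        pz = rng.dirichlet(np.ones(k))             # P(z)
        pd_z = rng.dirichlet(np.ones(n_docs), k)   # P(d|z), shape (k, n_docs)
        pw_z = rng.dirichlet(np.ones(n_words), k)  # P(w|z), shape (k, n_words)
        for _ in range(iters):
            # Tempered E-step: P(z|d,w) proportional to [P(z) P(d|z) P(w|z)]^beta
            joint = (pz[:, None, None] * pd_z[:, :, None] * pw_z[:, None, :]) ** beta
            post = joint / joint.sum(axis=0, keepdims=True)
            # M-step: re-estimate parameters from expected counts
            ez = post * counts[None, :, :]
            pz = ez.sum(axis=(1, 2))
            pd_z = ez.sum(axis=2) / pz[:, None]
            pw_z = ez.sum(axis=1) / pz[:, None]
            pz = pz / pz.sum()
        return pz, pd_z, pw_z

    def log_likelihood(counts, pz, pd_z, pw_z):
        # log P(corpus) under the mixture P(d,w) = sum_z P(z) P(d|z) P(w|z)
        p_dw = np.einsum('z,zd,zw->dw', pz, pd_z, pw_z)
        return float((counts * np.log(p_dw)).sum())

    # Toy corpus: two word groups spread over four documents.
    counts = np.array([[8., 7., 1., 0.],
                       [9., 6., 0., 1.],
                       [0., 1., 7., 9.],
                       [1., 0., 8., 6.]])

    pz, pd_z, pw_z = fit_plsa(counts, k=2, beta=0.8, seed=1)

    # Permuting the latent states leaves the likelihood unchanged, so the
    # "optimal" parameters are not unique (non-identifiability).
    perm = [1, 0]
    print(log_likelihood(counts, pz, pd_z, pw_z))
    print(log_likelihood(counts, pz[perm], pd_z[perm], pw_z[perm]))

The two printed log-likelihoods are identical, which is one simple form of the non-uniqueness the abstract refers to; the paper's synthetic-corpus argument goes further, showing that semantically meaningful parameters may not be recovered at all.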