Topic Significance Ranking of LDA Generative Models

  • Authors:
  • Loulwah AlSumait, Daniel Barbará, James Gentle, Carlotta Domeniconi

  • Affiliations:
  • Department of Computer Science, George Mason University, Fairfax, USA 22030 (AlSumait, Barbará, Domeniconi); Department of Computational and Data Sciences, George Mason University, Fairfax, USA 22030 (Gentle)

  • Venue:
  • ECML PKDD '09: Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, Part I
  • Year:
  • 2009


Abstract

Topic models, such as Latent Dirichlet Allocation (LDA), have recently been used to automatically generate topics for text corpora and to distribute the corpus words among those topics. However, not all of the estimated topics are equally important or correspond to genuine themes of the domain: some topics are collections of irrelevant words, or represent insignificant themes. Current approaches to topic modeling rely on manual examination to find meaningful topics. This paper presents the first automated unsupervised analysis of LDA models that distinguishes junk topics from legitimate ones and ranks topics by significance. The distance between each topic distribution and three definitions of a "junk distribution" is computed using a variety of measures, and these distances are combined into an expressive figure of topic significance through a 4-phase Weighted Combination approach. Our experiments on synthetic and benchmark datasets show the effectiveness of the proposed approach in ranking topic significance.
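To make the core idea concrete, the sketch below is a minimal, simplified stand-in for the paper's method (not its exact multi-measure, 4-phase weighted combination): it scores each LDA topic by its Kullback-Leibler divergence from a single uniform "junk" distribution over the vocabulary and ranks topics accordingly. The function names, the uniform-junk choice, and the toy data are assumptions made for illustration.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions over the same support."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def rank_topics_by_significance(topic_word):
    """Rank topics by their distance from a uniform "junk" distribution.

    topic_word: (num_topics, vocab_size) array of per-topic word
    probabilities (e.g., the phi matrix estimated by LDA). Topics far
    from uniform concentrate probability mass on a few words and are
    treated here as more significant; topics close to uniform look like
    junk. This uses only one junk definition and one distance measure,
    a deliberate simplification of the paper's approach.
    """
    num_topics, vocab_size = topic_word.shape
    junk = np.full(vocab_size, 1.0 / vocab_size)  # uniform junk distribution
    scores = np.array([kl_divergence(t, junk) for t in topic_word])
    order = np.argsort(-scores)  # most significant topics first
    return order, scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy phi: topic 0 is peaked (meaningful), topic 1 is near-uniform (junk).
    phi = np.vstack([
        rng.dirichlet(np.full(50, 0.05)),  # sparse, peaked topic
        rng.dirichlet(np.full(50, 50.0)),  # flat, junk-like topic
    ])
    order, scores = rank_topics_by_significance(phi)
    print("ranking (best first):", order, "scores:", np.round(scores, 3))
```

In this toy run the peaked topic receives a much higher score than the near-uniform one, which is the behavior one would expect the full significance ranking to exhibit for genuine versus junk topics.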