Investigating task performance of probabilistic topic models: an empirical study of PLSA and LDA

  • Authors:
  • Yue Lu, Qiaozhu Mei, Chengxiang Zhai

  • Affiliations:
  • Yue Lu: Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
  • Qiaozhu Mei: School of Information, University of Michigan, Ann Arbor, MI 48109, USA
  • Chengxiang Zhai: Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA

  • Venue:
  • Information Retrieval
  • Year:
  • 2011

Abstract

Probabilistic topic models have recently attracted much attention because of their successful applications in many text mining tasks such as retrieval, summarization, categorization, and clustering. Although many existing studies have reported promising performance of these topic models, none has systematically investigated their task performance; as a result, some critical questions that affect the performance of all applications of topic models remain mostly unanswered, particularly how to choose between competing models, how multiple local maxima affect task performance, and how to set parameters in topic models. In this paper, we address these questions by conducting a systematic investigation of two representative probabilistic topic models, probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA), on three representative text mining tasks: document clustering, text categorization, and ad hoc retrieval. The analysis of our experimental results provides a deeper understanding of topic models and many useful insights into how to optimize their performance for these typical tasks. The task-based evaluation framework generalizes to other topic models in the PLSA or LDA family.
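
As a rough illustration of the task-based setup the abstract describes (not the authors' code or data), the minimal Python sketch below fits LDA on a toy corpus with scikit-learn and then uses the inferred document-topic distributions as features for one of the three downstream tasks, document clustering. The number of topics and the Dirichlet prior values are illustrative assumptions; they correspond to the kind of parameters whose settings the paper investigates.

```python
# A minimal sketch of task-based evaluation of a topic model:
# fit LDA, then feed the document-topic distributions to a downstream task.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

docs = [
    "topic models for text retrieval",
    "latent dirichlet allocation for document clustering",
    "support vector machines for text categorization",
    "ad hoc retrieval with language models",
]

# Bag-of-words term counts (LDA expects raw counts, not tf-idf weights).
counts = CountVectorizer().fit_transform(docs)

# n_components (number of topics) and the Dirichlet priors
# doc_topic_prior / topic_word_prior are illustrative assumptions here.
lda = LatentDirichletAllocation(
    n_components=2,
    doc_topic_prior=0.1,
    topic_word_prior=0.01,
    random_state=0,  # inference has multiple local maxima, so fix the seed
)
doc_topics = lda.fit_transform(counts)  # shape: (n_docs, n_topics)

# Downstream task: cluster documents in the low-dimensional topic space.
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(doc_topics)
print(labels)
```

In the same spirit, the topic distributions could instead be passed to a classifier for text categorization, or used to smooth a document language model for ad hoc retrieval; the point of the framework is that the task metric, rather than held-out likelihood alone, judges the fitted model.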