Improving topic evaluation using conceptual knowledge

  • Authors:
  • Claudiu Cristian Musat; Julien Velcin; Stefan Trausan-Matu; Marian-Andrei Rizoiu

  • Affiliations:
  • Computer Science Department, "Politehnica" University of Bucharest, Romania; ERIC Laboratoire, Université Lumière, Lyon 2, France; Computer Science Department, "Politehnica" University of Bucharest, Romania; ERIC Laboratoire, Université Lumière, Lyon 2, France

  • Venue:
  • IJCAI'11 Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Three
  • Year:
  • 2011


Abstract

The growing number of statistical topic models has led to a need to better evaluate their output. Traditional evaluation methods estimate a model's fit to unseen data, but it has recently been shown that human judgment can differ greatly from these measures. There is therefore a pressing need for methods that better emulate human judgment. In this paper we present a system that computes the conceptual relevance of individual topics from a given model, based on information drawn from a concept hierarchy, in this case WordNet. Conceptual relevance is regarded as the ability to attribute a concept to each topic and, based on that concept, to separate words related to the topic from unrelated ones. In multiple experiments, we demonstrate the correlation between the automatic evaluation method and the answers of human evaluators, across various corpora and difficulty levels. By shifting the evaluation focus from a statistical one to a conceptual one, we were able to detect which topics are conceptually meaningful and rank them accordingly.
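
To make the two-step idea concrete (attribute a concept to a topic, then score how well that concept separates related words from unrelated ones), here is a minimal, hypothetical Python sketch using NLTK's WordNet interface. This is not the authors' algorithm: the helper names (`covering_concept`, `conceptual_relevance`), the use of depth-weighted lowest common hypernyms, and the toy topic are illustrative assumptions.

```python
# A minimal sketch, assuming NLTK with the WordNet corpus downloaded
# (pip install nltk; then nltk.download("wordnet")).
from itertools import combinations
from nltk.corpus import wordnet as wn

def covering_concept(words):
    """Pick a candidate covering concept for the topic: a common hypernym
    shared by pairs of the topic's top words, preferring specific (deep)
    concepts over near-root ones. Noun senses only."""
    scores = {}
    for w1, w2 in combinations(words, 2):
        for s1 in wn.synsets(w1, pos=wn.NOUN):
            for s2 in wn.synsets(w2, pos=wn.NOUN):
                for h in s1.lowest_common_hypernyms(s2):
                    # Weight by depth so trivial ancestors like
                    # entity.n.01 (depth 0) do not win by default.
                    scores[h] = scores.get(h, 0) + h.min_depth()
    return max(scores, key=scores.get) if scores else None

def conceptual_relevance(words, concept):
    """Fraction of topic words with at least one noun sense falling
    under the chosen concept's hypernym subtree."""
    if concept is None or not words:
        return 0.0
    covered = 0
    for w in words:
        senses = wn.synsets(w, pos=wn.NOUN)
        if any(concept in path for s in senses for path in s.hypernym_paths()):
            covered += 1
    return covered / len(words)

topic = ["dog", "cat", "horse", "wolf", "election"]  # toy topic, one intruder
concept = covering_concept(topic)
print(concept, conceptual_relevance(topic, concept))
```

With the toy topic above, an unrelated word such as "election" should fall outside whatever animal-like concept is selected and thus lower the score, which is the separation behavior the abstract describes; the actual system's concept selection and scoring may differ substantially from this sketch.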