The latent words language model

  • Authors:
  • Koen Deschacht, Jan De Belder, Marie-Francine Moens

  • Affiliations:
  • K.U.Leuven, Department of Computer Science, Celestijnenlaan 200A, B-3001 Heverlee, Belgium

  • Venue:
  • Computer Speech & Language
  • Year:
  • 2012

Abstract

We present a new generative model of natural language, the latent words language model. This model introduces, for every word in a text, a latent variable that represents synonyms or related words of that word in the given context. We develop novel methods to train this model and to find the expected value of the latent variables for a given unseen text. The learned word similarities help to reduce the sparseness problems of traditional n-gram language models. We show that the model significantly outperforms interpolated Kneser-Ney smoothing and class-based language models on three different corpora. Furthermore, the latent variables are useful features for information extraction: for both semantic role labeling and word sense disambiguation, the performance of a supervised classifier increases when these variables are incorporated as extra features. The improvement is especially large when only a small annotated corpus is available for training.
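To make the generative story concrete, here is a minimal sketch of how such a model could generate text, under the simplifying assumptions that the hidden words follow a bigram model and that each observed word is emitted from its hidden word. The `transition` and `emission` tables, their vocabularies, and all probabilities are toy values invented for illustration; they are not the distributions or training procedure of the paper.

```python
import random

# Toy P(h_i | h_{i-1}): a bigram model over *hidden* words.
# All entries below are invented placeholders, not learned values.
transition = {
    "<s>":     {"the": 0.6, "a": 0.4},
    "the":     {"car": 0.5, "vehicle": 0.5},
    "a":       {"car": 0.5, "vehicle": 0.5},
    "car":     {"</s>": 1.0},
    "vehicle": {"</s>": 1.0},
}

# Toy P(w_i | h_i): each hidden word emits itself or a synonym /
# related word, which is where the learned word similarities live.
emission = {
    "the":     {"the": 1.0},
    "a":       {"a": 1.0},
    "car":     {"car": 0.7, "automobile": 0.3},
    "vehicle": {"vehicle": 0.6, "car": 0.4},
}

def sample(dist):
    """Draw one item from a {item: probability} dictionary."""
    r, total = random.random(), 0.0
    for item, p in dist.items():
        total += p
        if r < total:
            return item
    return item  # guard against floating-point rounding

def generate():
    """Sample hidden words h_i, then the observed words w_i they emit."""
    hidden, observed = [], []
    h = "<s>"
    while True:
        h = sample(transition[h])             # h_i ~ P(h_i | h_{i-1})
        if h == "</s>":
            break
        hidden.append(h)
        observed.append(sample(emission[h]))  # w_i ~ P(w_i | h_i)
    return hidden, observed

hidden, observed = generate()
print("hidden:  ", hidden)   # e.g. ['the', 'vehicle']
print("observed:", observed) # e.g. ['the', 'car']
```

The sketch shows why the latent variables can reduce sparseness: an observed sentence such as "the automobile" is explained through the hidden word "car", so statistics are shared across synonyms even when a particular surface n-gram was never seen.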