Dynamic Bayesian networks for language modeling

  • Authors:
  • Pascal Wiggers; Leon J. M. Rothkrantz

  • Affiliations:
  • Man-Machine Interaction Group, Delft University of Technology, Delft, The Netherlands (both authors)

  • Venue:
  • TSD'06: Proceedings of the 9th International Conference on Text, Speech and Dialogue
  • Year:
  • 2006

Abstract

Although n-gram models are still the de facto standard in language modeling for speech recognition, it has been shown that more sophisticated models achieve better accuracy by taking additional information, such as syntactic rules, semantic relations, or domain knowledge, into account. Unfortunately, most of the effort in developing such models goes into the implementation of handcrafted inference routines. What is lacking is a generic mechanism for introducing background knowledge into a language model. We propose the use of dynamic Bayesian networks for this purpose. Dynamic Bayesian networks can be seen as a generalization of the n-gram models and HMMs traditionally used in language modeling and speech recognition. Whereas those models use a single random variable to represent state, Bayesian networks can have any number of variables. As such, they are particularly well suited to the construction of models that take additional information into account. In this paper, language modeling with belief networks is discussed. Examples of belief network implementations of well-known language models are given, and a new model is presented that models dependencies between the content words in a sentence.
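
To make the generalization concrete, here is a minimal sketch (not the authors' implementation) of the idea the abstract describes: a bigram model is a dynamic Bayesian network with a single word variable per time slice, and adding a second variable that also conditions each word, here a hypothetical latent topic, yields a richer network whose sentence probability is obtained by summing out the hidden variable. The variable names, vocabulary, and all probabilities below are illustrative assumptions.

```python
# Hedged sketch, not the paper's model: a bigram language model is a DBN with
# one word variable W_t per time slice; P(W_t | W_{t-1}) is its only CPD.
# Adding a latent topic variable T that also conditions each word yields a
# richer DBN. Sentence probability then sums the joint over the hidden topic.
# All names (W_t, T), words, and probabilities below are illustrative.

P_topic = {"pets": 0.5, "weather": 0.5}  # prior P(T)

# Conditional probability table P(W_t | W_{t-1}, T)
P_word = {
    ("<s>", "pets"):    {"the": 1.0},
    ("the", "pets"):    {"cat": 0.7, "rain": 0.3},
    ("<s>", "weather"): {"the": 1.0},
    ("the", "weather"): {"cat": 0.2, "rain": 0.8},
}

def sentence_prob(words):
    """P(w_1..w_n) = sum over T of P(T) * prod_t P(w_t | w_{t-1}, T)."""
    total = 0.0
    for topic, p in P_topic.items():
        prev = "<s>"
        for w in words:
            p *= P_word.get((prev, topic), {}).get(w, 0.0)
            prev = w
        total += p
    return total

print(sentence_prob(["the", "cat"]))   # 0.5*0.7 + 0.5*0.2 = 0.45
print(sentence_prob(["the", "rain"]))  # 0.5*0.3 + 0.5*0.8 = 0.55
```

Note that the extra variable changes only the factorization, not the shape of the computation: the same left-to-right recursion used for n-grams carries over, with one added summation, which illustrates the kind of generic inference mechanism the paper argues dynamic Bayesian networks provide.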