Although n-gram models are still the de facto standard in language modeling for speech recognition, it has been shown that more sophisticated models achieve better accuracy by taking additional information, such as syntactic rules, semantic relations, or domain knowledge, into account. Unfortunately, most of the effort in developing such models goes into the implementation of handcrafted inference routines. What is lacking is a generic mechanism for introducing background knowledge into a language model. We propose the use of dynamic Bayesian networks for this purpose. Dynamic Bayesian networks can be seen as a generalization of the n-gram models and HMMs traditionally used in language modeling and speech recognition. Whereas those models use a single random variable to represent state, Bayesian networks can have any number of variables, which makes them particularly well suited for the construction of models that take additional information into account. In this paper, language modeling with belief networks is discussed. Examples of belief network implementations of well-known language models are given, and a new model is presented that captures dependencies between the content words in a sentence.
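The contrast drawn above can be made concrete with a small sketch. The toy corpus, the word-class map, and both estimators below are hypothetical illustrations, not the paper's actual models: a bigram model conditions each word on a single state variable (the previous word), while a factored model adds a second parent variable (the previous word's class) to the same network slice, which is the sense in which a DBN generalizes the n-gram case.

```python
from collections import defaultdict

def train_bigram(corpus):
    """Bigram model: one state variable per slice, P(w_t | w_{t-1})."""
    counts = defaultdict(lambda: defaultdict(int))
    for sent in corpus:
        padded = ["<s>"] + sent + ["</s>"]
        for prev, word in zip(padded, padded[1:]):
            counts[prev][word] += 1
    # Relative-frequency estimates of the conditional distributions.
    return {prev: {w: c / sum(nxt.values()) for w, c in nxt.items()}
            for prev, nxt in counts.items()}

def train_factored(corpus, word_class):
    """Factored model: the state is split into two variables, the previous
    word and its class, giving P(w_t | w_{t-1}, c_{t-1}) -- one extra
    parent in the network slice."""
    counts = defaultdict(lambda: defaultdict(int))
    for sent in corpus:
        padded = ["<s>"] + sent + ["</s>"]
        for prev, word in zip(padded, padded[1:]):
            counts[(prev, word_class.get(prev, "OTHER"))][word] += 1
    return {ctx: {w: c / sum(nxt.values()) for w, c in nxt.items()}
            for ctx, nxt in counts.items()}

# Hypothetical two-sentence corpus and word-class assignment.
corpus = [["the", "dog", "barks"], ["the", "cat", "sleeps"]]
classes = {"the": "DET", "dog": "NOUN", "cat": "NOUN"}

bigram = train_bigram(corpus)
factored = train_factored(corpus, classes)
```

With real data, the extra class variable lets probability mass be shared across words of the same class; here the two models coincide because every context is seen only with one class.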