Neural network language models (NNLMs) have achieved very good performance in large-vocabulary continuous speech recognition (LVCSR) systems. Because decoding with NNLMs is computationally expensive, there is considerable interest in methods that approximate NNLMs with simpler language models suitable for fast decoding. In this work, we propose an approximate method for converting a feedforward NNLM into a back-off n-gram language model that can be used directly in existing LVCSR decoders. We convert NNLMs of increasing order to pruned back-off language models, using the lower-order models to constrain the n-grams allowed in the higher-order models. In experiments on Broadcast News data, we find that the resulting back-off models retain the bulk of the gain achieved by NNLMs over conventional n-gram language models, and yield accuracy improvements over existing methods for converting NNLMs to back-off models. In addition, the proposed approach can be applied to any type of non-back-off language model to enable efficient decoding.
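To make the conversion idea concrete, the following is a minimal Python sketch of one way the hierarchical constraint could work: n-grams of each order are generated only for histories retained at the previous order, with probabilities queried from the NNLM and pruned by a simple threshold. The NNLM stand-in (`nnlm_prob`), the toy vocabulary, and the threshold-based pruning are all assumptions introduced for illustration; this is not the paper's actual algorithm or pruning criterion, and back-off weight computation is omitted.

```python
# Minimal sketch of the order-by-order conversion idea, assuming a stand-in
# NNLM. All names below (nnlm_prob, VOCAB, prune_threshold) are hypothetical
# placeholders for illustration, not the paper's implementation.

VOCAB = ["the", "cat", "sat", "</s>"]

def nnlm_prob(history, word):
    """Stand-in for a trained feedforward NNLM's softmax output P(word | history).
    A toy preference table makes one continuation likely and the rest unlikely."""
    preferred = {"<s>": "the", "the": "cat", "cat": "sat", "sat": "</s>"}
    prev = history[-1] if history else "<s>"
    if preferred.get(prev) == word:
        return 0.7
    return 0.3 / (len(VOCAB) - 1)  # spread the remaining mass over other words

def convert_nnlm_to_backoff(max_order=3, prune_threshold=0.2):
    """Build pruned n-gram probability tables order by order.
    The constraint from the abstract: order-k n-grams are generated only for
    histories whose (k-1)-gram survived pruning at the lower order.
    (Back-off weight computation / renormalization is not shown.)"""
    tables = {1: {(w,): nnlm_prob((), w) for w in VOCAB}}  # keep all unigrams
    for order in range(2, max_order + 1):
        kept = {}
        for history in tables[order - 1]:          # only retained lower-order n-grams
            for w in VOCAB:
                p = nnlm_prob(history, w)
                if p >= prune_threshold:           # simple threshold pruning (assumption)
                    kept[history + (w,)] = p
        tables[order] = kept
    return tables

if __name__ == "__main__":
    backoff_lm = convert_nnlm_to_backoff()
    for order, table in sorted(backoff_lm.items()):
        print(f"order {order}: {len(table)} n-grams retained")
```

Running the sketch prints how many n-grams survive at each order; the higher-order tables stay small because they are expanded only from histories that were retained at the lower order, which is the property that keeps the converted back-off model compact enough for standard decoders.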