Learning internal representations by error propagation. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1.
Statistical Methods for Speech Recognition.
Neural Networks for Pattern Recognition.
Comparing Simple Recurrent Networks and n-Grams in a Large Corpus. Applied Intelligence.
A neural probabilistic language model. The Journal of Machine Learning Research.
Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, special issue on using large corpora: II.
Probabilistic top-down parsing and language modeling. Computational Linguistics.
Estimation of stochastic context-free grammars and their use as language models. Computer Speech and Language.
New directions in connectionist language modeling. In: IWANN'03, Proceedings of the 7th International Conference on Artificial and Natural Neural Networks (Computational Methods in Neural Modeling), Vol. 1.
A Maximum Likelihood Approach to Continuous Speech Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence.
In language engineering, language models are employed to improve system performance. These language models are usually N-gram models, estimated from large text corpora using the occurrence frequencies of the N-grams. An alternative to conventional frequency-based estimation of N-gram probabilities is to use neural networks for this purpose. In this paper, an approach to language modeling with a hybrid language model is presented: a linear combination of a connectionist N-gram model, which represents the global relations between certain linguistic categories, and a stochastic model of the distribution of words into those categories. The hybrid language model is tested on the Wall Street Journal corpus as processed in the Penn Treebank project.
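To make the combination concrete, the following is a minimal Python sketch of a class-based hybrid of this kind. The tables word2cat, p_cat_ngram, and p_word_given_cat, the interpolation weight lam, and the bigram factorization P(w_t | w_{t-1}) = P(w_t | c(w_t)) * P(c(w_t) | c(w_{t-1})) are illustrative assumptions standing in for the paper's trained connectionist category model and its category-membership statistics, not the authors' actual implementation.

# Hypothetical stand-ins for the two components described in the abstract.
# Category assignment c(w) for each word.
word2cat = {"the": "DET", "dog": "NOUN", "barks": "VERB"}

# Category bigram probabilities P(c_t | c_{t-1}); in the paper this role
# is played by a connectionist (neural) model over linguistic categories.
p_cat_ngram = {
    ("DET", "NOUN"): 0.7,
    ("NOUN", "VERB"): 0.5,
}

# Stochastic word distribution within each category, P(w | c).
p_word_given_cat = {
    ("dog", "NOUN"): 0.1,
    ("barks", "VERB"): 0.05,
}

def p_class_based(word, prev_word):
    """Class-based estimate P(w_t | w_{t-1}) = P(w_t | c(w_t)) * P(c(w_t) | c(w_{t-1}))."""
    c, c_prev = word2cat[word], word2cat[prev_word]
    return (p_word_given_cat.get((word, c), 0.0)
            * p_cat_ngram.get((c_prev, c), 0.0))

def p_hybrid(word, prev_word, p_word_ngram, lam=0.5):
    """Linear combination of the class-based model and a plain word n-gram estimate."""
    return lam * p_class_based(word, prev_word) + (1 - lam) * p_word_ngram

# Example: combine the class-based estimate for "dog" after "the" with an
# assumed word-level n-gram probability of 0.02.
print(p_hybrid("dog", "the", p_word_ngram=0.02, lam=0.6))  # 0.6*0.07 + 0.4*0.02 = 0.05

The factorization through categories reduces the number of parameters to estimate (one distribution per category pair plus per-category word distributions), which is the usual motivation for class-based models; the linear combination then lets the word-level n-gram recover word-specific regularities the categories abstract away.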