An Approach to Estimate Perplexity Values for Language Models Based on Phrase Classes
IbPRIA '09 Proceedings of the 4th Iberian Conference on Pattern Recognition and Image Analysis
In this work, we propose and compare two different approaches to a two-level language model. Both are based on phrase classes, but they handle the phrases within the classes in different ways. We provide a complete formulation consistent with both approaches. The proposed language models were integrated into an Automatic Speech Recognition (ASR) system and evaluated in terms of Word Error Rate. Several series of experiments were carried out over a spontaneous human–machine dialogue corpus in Spanish, in which users requested information about long-distance trains by telephone. The results show that integrating phrases into classes with the proposed language models improves the performance of an ASR system. Moreover, the results suggest that the history length yielding the best performance depends on the features of the model itself: not all models achieve their best results with the same history length.
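The two-level idea described above can be illustrated with a minimal sketch: the word string is segmented into phrases, each phrase is assigned to a class, and the probability factors into a class n-gram term times a within-class phrase term. The class names, phrases, and probability values below are invented for illustration and are not taken from the paper; a real model would estimate them from the training corpus and add smoothing.

```python
import math

# Hypothetical class-bigram probabilities p(class | previous class).
CLASS_BIGRAM = {
    ("<s>", "QUERY"): 0.6,
    ("QUERY", "CITY"): 0.7,
    ("CITY", "TIME"): 0.5,
}

# Hypothetical within-class phrase probabilities p(phrase | class).
PHRASE_IN_CLASS = {
    "QUERY": {"quiero un billete": 0.4, "deme informacion": 0.3},
    "CITY": {"a madrid": 0.5, "a bilbao": 0.2},
    "TIME": {"por la manana": 0.6},
}

def two_level_logprob(segmentation):
    """Log-probability of a (class, phrase) segmentation under the
    two-level factorization: class bigram times within-class phrase term."""
    logp = 0.0
    prev = "<s>"
    for cls, phrase in segmentation:
        logp += math.log(CLASS_BIGRAM[(prev, cls)])      # class-level term
        logp += math.log(PHRASE_IN_CLASS[cls][phrase])   # phrase-level term
        prev = cls
    return logp

seg = [("QUERY", "quiero un billete"),
       ("CITY", "a madrid"),
       ("TIME", "por la manana")]
print(two_level_logprob(seg))  # log(0.6*0.4 * 0.7*0.5 * 0.5*0.6)
```

Here the class level uses a bigram history for simplicity; the abstract's observation that the optimal history length is model-dependent corresponds to varying the order of the class-level n-gram.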