Many of the kinds of language model used in speech understanding suffer from imperfect modeling of intra-sentential contextual influences. I argue that this problem can be addressed by automatically clustering the sentences in a training corpus into subcorpora on the criterion of entropy reduction, and calculating separate language model parameters for each cluster. This kind of clustering offers a way to represent important contextual effects and can therefore significantly improve the performance of a model. It also offers a reasonably automatic means to gather evidence on whether a more complex, context-sensitive model using the same general kind of linguistic information is likely to reward the effort that would be required to develop it: if clustering improves the performance of a model, this demonstrates the existence of further context dependencies not exploited by the unclustered model. As evidence for these claims, I present results showing that clustering improves some models but not others for the ATIS domain. These results are consistent with other findings for such models, suggesting that the existence or otherwise of an improvement brought about by clustering is indeed a good pointer to whether it is worth developing the unclustered model further.
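The abstract does not spell out the clustering procedure, but the idea of partitioning training sentences into subcorpora so that each subcorpus's own model encodes its sentences more cheaply can be sketched as follows. This is a minimal illustrative assumption, not the paper's actual method: it uses hard-assignment iterative clustering with add-one-smoothed unigram models, reassigning each sentence to the cluster whose model gives it the lowest per-word cross-entropy.

```python
import math
import random
from collections import Counter

def train_unigram(sentences, vocab):
    # Add-one smoothed unigram model estimated from one subcorpus.
    counts = Counter(w for s in sentences for w in s)
    total = sum(counts.values())
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

def cross_entropy(sentence, model):
    # Per-word cost (bits) of encoding the sentence with the model.
    return -sum(math.log2(model[w]) for w in sentence) / len(sentence)

def cluster_sentences(corpus, k=2, iters=10, seed=0):
    # Alternate between (a) training a separate model on each cluster and
    # (b) moving every sentence to the cluster whose model encodes it most
    # cheaply, i.e. greedily reducing total cross-entropy.
    rng = random.Random(seed)
    vocab = {w for s in corpus for w in s}
    assign = [rng.randrange(k) for _ in corpus]
    for _ in range(iters):
        models = [train_unigram([s for s, a in zip(corpus, assign) if a == j],
                                vocab)
                  for j in range(k)]
        assign = [min(range(k), key=lambda j: cross_entropy(s, models[j]))
                  for s in corpus]
    return assign, models
```

Separate language model parameters are then estimated per cluster; a full system would interpolate or select among the cluster models at recognition time, and would typically use n-gram rather than unigram statistics.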