Empirical comparisons of various discriminative language models for speech recognition
ROCLING '11 Proceedings of the 23rd Conference on Computational Linguistics and Speech Processing
Discriminative language modeling (DLM) attempts to improve speech recognition performance by reranking the recognition hypotheses produced by a baseline system. Most existing DLM methods treat reranking as a linear discrimination problem and assume that all test utterances share a single parameter vector for hypothesis reranking. The latter assumption, however, can yield a trained DLM with weak generalizability and unsatisfactory performance. To address this problem, we propose a relevance-based DLM (RDLM) framework that efficiently infers the DLM parameters for each test utterance on the fly, leading to better recognition performance. We investigate the structure and characteristics of the RDLM framework in detail, and thoroughly analyze and verify its performance through comparisons with existing DLM methods.
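As a minimal sketch of the linear-discrimination view of reranking described above (the function names, feature names, and toy data below are illustrative assumptions, not from the paper): each hypothesis in an utterance's N-best list is represented as a feature vector, scored by a dot product with a shared weight vector, and the highest-scoring hypothesis is returned.

```python
def score(w, features):
    """Linear score of a hypothesis: dot product of the weight
    vector w with the hypothesis's (sparse) feature vector."""
    return sum(w.get(f, 0.0) * v for f, v in features.items())

def rerank(w, nbest):
    """Pick the highest-scoring hypothesis from an N-best list.

    nbest: list of (hypothesis_text, feature_dict) pairs as produced
    by a baseline recognizer (features here are hypothetical).
    """
    return max(nbest, key=lambda hyp: score(w, hyp[1]))

# Toy example: the baseline acoustic/LM score is itself a feature,
# alongside illustrative word-indicator features.
w = {"baseline_score": 1.0, "word:recognize": 0.5, "word:wreck": -0.5}
nbest = [
    ("wreck a nice beach", {"baseline_score": 1.2, "word:wreck": 1.0}),
    ("recognize speech", {"baseline_score": 1.0, "word:recognize": 1.0}),
]
best_text, _ = rerank(w, nbest)
print(best_text)  # → recognize speech
```

A conventional DLM would train one `w` on all data; the per-utterance inference proposed in the paper would instead adapt the effective weights for each test utterance before this reranking step.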