A dynamic language model for speech recognition
HLT '91 Proceedings of the workshop on Speech and Natural Language
This paper reports on a series of experiments in which the Hidden Markov Model baseforms and the language model probabilities were updated from spontaneously dictated speech captured during recognition sessions with the IBM Tangora system. The basic technique for baseform modification consisted of constructing new fenonic baseforms for all recognized words. To modify the language model probabilities, a simplified version of a cache language model was implemented. The baseline word error rate across six talkers was 3.7%. Baseform adaptation reduced the average error rate to 3.5%, and the cache language model reduced it to 3.2%. Combining both techniques further reduced the error rate to 3.1%, a respectable improvement over the original error rate, especially given that the system was speaker-trained prior to adaptation.
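The cache language model idea described above can be illustrated with a minimal sketch: a static unigram distribution is interpolated with a unigram distribution estimated from a cache of recently recognized words, so that words the talker has just dictated become more probable. The class name, parameter values, and interpolation weight below are illustrative assumptions, not the paper's exact formulation.

```python
from collections import Counter, deque


class CacheLanguageModel:
    """Sketch of a simplified cache language model (hypothetical
    parameters): P(w) = lam * P_static(w) + (1 - lam) * P_cache(w)."""

    def __init__(self, static_probs, cache_size=200, lam=0.8):
        self.static_probs = static_probs        # word -> static probability
        self.cache = deque(maxlen=cache_size)   # most recently recognized words
        self.lam = lam                          # weight on the static model

    def update(self, word):
        # After each recognition, push the decoded word into the cache.
        self.cache.append(word)

    def prob(self, word):
        # Interpolate the static estimate with the cache-relative frequency.
        p_static = self.static_probs.get(word, 1e-6)
        if not self.cache:
            return p_static
        p_cache = Counter(self.cache)[word] / len(self.cache)
        return self.lam * p_static + (1 - self.lam) * p_cache


# Usage: a rare word becomes much more probable once it has been dictated.
lm = CacheLanguageModel({"the": 0.05, "tangora": 0.0001})
before = lm.prob("tangora")
for w in ["tangora", "system", "tangora"]:
    lm.update(w)
after = lm.prob("tangora")
```

In this sketch `after` exceeds `before`, capturing the adaptation effect the paper exploits: recently dictated vocabulary is boosted relative to the static model.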