Towards speech recognition without vocabulary-specific training

  • Authors:
  • Hsiao-Wuen Hon; Kai-Fu Lee; Robert Weide

  • Affiliations:
  • Carnegie Mellon University, Pittsburgh, PA (all authors)

  • Venue:
  • HLT '89 Proceedings of the workshop on Speech and Natural Language
  • Year:
  • 1989


Abstract

With the emergence of high-performance speaker-independent systems, a great barrier to the man-machine interface has been overcome. This work describes our next step toward improving the usability of speech recognizers: the use of vocabulary-independent (VI) models. If successful, VI models are trained once and for all; they will completely eliminate task-specific training and enable rapid configuration of speech recognizers for new vocabularies. Our initial results using generalized triphones as VI models show that with more training data and more detailed modeling, the error rate of VI models can be reduced substantially. For example, the error rates for VI models with 5,000, 10,000 and 15,000 training sentences are 23.9%, 15.2% and 13.3%, respectively. Moreover, if task-specific training data are available, we can interpolate them with VI models. Our preliminary results show that this interpolation can lead to an 18% error rate reduction over task-specific models.
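The interpolation step mentioned at the end of the abstract can be sketched as a weighted combination of model parameter estimates. The function below is a hypothetical illustration, not the paper's method: it linearly mixes a vocabulary-independent output distribution with a task-specific one, where the weight `lam` is an assumed parameter (in practice such weights would be estimated, e.g. by deleted interpolation).

```python
def interpolate(vi_dist, task_dist, lam=0.5):
    """Linearly interpolate two discrete probability distributions.

    vi_dist   -- estimate from vocabulary-independent (VI) models
    task_dist -- estimate from task-specific training data
    lam       -- weight given to the task-specific estimate (assumed, 0..1)
    """
    assert len(vi_dist) == len(task_dist)
    return [lam * t + (1.0 - lam) * v for v, t in zip(vi_dist, task_dist)]

# Illustrative values only: a flat VI estimate combined with a sharper
# task-specific estimate. The result is still a valid distribution.
vi = [0.25, 0.25, 0.25, 0.25]
task = [0.10, 0.20, 0.30, 0.40]
combined = interpolate(vi, task, lam=0.6)
print(combined)
```

Because both inputs sum to 1 and the weights sum to 1, the interpolated output also sums to 1, so it remains a proper distribution.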