Context dependent vector quantization for continuous speech recognition

  • Authors:
  • L. R. Bahl; P. V. de Souza; P. S. Gopalakrishnan; M. A. Picheny

  • Affiliations:
  • IBM T. J. Watson Research Center, Yorktown Heights, NY; Apple Computers Inc., Cupertino, CA and IBM T. J. Watson Research Center, Yorktown Heights, NY; IBM T. J. Watson Research Center, Yorktown Heights, NY; IBM T. J. Watson Research Center, Yorktown Heights, NY

  • Venue:
  • ICASSP'93: Proceedings of the 1993 IEEE International Conference on Acoustics, Speech, and Signal Processing: Speech Processing - Volume II
  • Year:
  • 1993


Abstract

Many speech recognition systems today use a vector quantization stage in the initial processing of the speech signal to generate an integer value representing a parameter vector derived from a short interval of speech. In most systems the vector quantizer is designed without regard to the phonetic context in which the vectors occur. However, it is well known that the realization of many phones changes significantly when they are uttered in the context of other phones, and it would be worthwhile to use this information when designing and using the vector quantizer. In this paper we describe the design of a vector quantizer that takes into account the variations in the vectors that result from the phonetic context in which the phones are realized, present the procedure used to build the vector quantizer codebook, and report experimental results.
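The abstract describes conditioning the quantizer on phonetic context rather than using a single context-independent codebook. As a rough sketch of that general idea (not the authors' actual procedure, which the abstract does not specify), one could build a separate sub-codebook for each context class with an ordinary clustering step; the class name `ContextVQ`, the use of plain k-means, and all parameter choices below are illustrative assumptions.

```python
# Illustrative sketch: a vector quantizer whose codebook is partitioned by
# phonetic context. For each context class we cluster the training vectors
# observed in that context, so the same acoustic vector can map to different
# codeword indices depending on the context in which it occurs.
import numpy as np


def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means; returns k centroids for the given (n, dim) vectors."""
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(vectors[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the previous one if a cluster is empty.
        for j in range(k):
            members = vectors[labels == j]
            if len(members) > 0:
                centroids[j] = members.mean(axis=0)
    return centroids


class ContextVQ:
    """Vector quantizer with one sub-codebook per phonetic context class."""

    def __init__(self, codewords_per_context=8):
        self.codewords_per_context = codewords_per_context
        self.codebooks = {}  # context label -> (k, dim) array of centroids

    def fit(self, vectors, contexts):
        """Cluster the training vectors seen in each context separately."""
        vectors = np.asarray(vectors, dtype=float)
        for ctx in set(contexts):
            idx = [i for i, c in enumerate(contexts) if c == ctx]
            ctx_vectors = vectors[idx]
            k = min(self.codewords_per_context, len(ctx_vectors))
            self.codebooks[ctx] = kmeans(ctx_vectors, k)

    def quantize(self, vector, context):
        """Return the index of the nearest codeword in the context's sub-codebook."""
        centroids = self.codebooks[context]
        return int(np.linalg.norm(centroids - np.asarray(vector), axis=1).argmin())
```

In use, `fit` would receive frame-level parameter vectors paired with a context label (e.g., the identity of neighboring phones from a forced alignment), and `quantize` would then label new frames with context-specific codeword indices for the recognizer's acoustic models.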