Speech recognition using a stochastic language model integrating local and global constraints

  • Authors:
  • Ryosuke Isotani; Shoichi Matsunaga

  • Affiliations:
  • ATR Interpreting Telecommunications Research Laboratories, Seika-cho, Soraku-gun, Kyoto, Japan; ATR Interpreting Telecommunications Research Laboratories, Seika-cho, Soraku-gun, Kyoto, Japan

  • Venue:
  • HLT '94 Proceedings of the workshop on Human Language Technology
  • Year:
  • 1994

Abstract

In this paper, we propose a new stochastic language model that effectively integrates local and global constraints, and we describe a speech recognition system based on it. The proposed language model uses dependencies between adjacent words as local constraints, in the same way as conventional word N-gram models. To capture the global constraints between non-contiguous words, we take into account the sequence of function words and the sequence of content words, which are expected to represent the syntactic and semantic relationships between words, respectively. Furthermore, we show that by assuming independence between the local and global constraints, the number of parameters to be estimated and stored is greatly reduced. The proposed language model is incorporated into a speech recognizer based on the time-synchronous Viterbi decoding algorithm and is compared with word bigram and trigram models. The proposed model gives a better recognition rate than the bigram model, though slightly worse than the trigram model, with only twice as many parameters as the bigram model.
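
To make the combination of constraints concrete, the sketch below shows one way such a model could be scored: a local bigram over adjacent words is multiplied by a bigram over the function-word or content-word subsequence and renormalized by the unigram probability, reflecting an independence-style combination. This is a minimal illustration under assumed details (the toy word-class lists, the add-alpha smoothing, and the exact combination and normalization rule are our assumptions, not necessarily the paper's formulation).

```python
from collections import defaultdict

# Hypothetical toy function-word list; illustrative only.
FUNCTION_WORDS = {"the", "a", "of", "to", "in", "on", "is"}


def is_function_word(w):
    return w in FUNCTION_WORDS


class LocalGlobalLM:
    """Sketch of a language model combining a local word bigram with
    separate bigrams over the function-word and content-word
    subsequences, assuming the two constraint types are independent."""

    def __init__(self):
        self.unigram = defaultdict(int)
        self.bigram = defaultdict(int)        # adjacent-word pairs (local constraint)
        self.class_bigram = defaultdict(int)  # pairs within each word-class subsequence (global constraint)
        self.total = 0

    def train(self, sentences):
        for sent in sentences:
            prev = None
            prev_by_class = {True: None, False: None}  # last function / content word seen
            for w in sent:
                self.unigram[w] += 1
                self.total += 1
                if prev is not None:
                    self.bigram[(prev, w)] += 1
                cls = is_function_word(w)
                if prev_by_class[cls] is not None:
                    self.class_bigram[(prev_by_class[cls], w)] += 1
                prev_by_class[cls] = w
                prev = w

    def prob(self, w, prev, prev_func, prev_content, alpha=1e-3):
        """Approximate P(w | history) as the product of the local bigram
        probability and the within-class bigram probability, divided by
        the unigram probability. This independence-style combination is
        an assumption for illustration; the paper's exact formula may differ."""
        V = len(self.unigram) + 1
        p_uni = (self.unigram[w] + alpha) / (self.total + alpha * V)
        p_local = (self.bigram[(prev, w)] + alpha) / (self.unigram[prev] + alpha * V)
        prev_same_class = prev_func if is_function_word(w) else prev_content
        p_global = (self.class_bigram[(prev_same_class, w)] + alpha) / (
            self.unigram[prev_same_class] + alpha * V)
        return p_local * p_global / p_uni


if __name__ == "__main__":
    lm = LocalGlobalLM()
    lm.train([["the", "cat", "sat", "on", "the", "mat"],
              ["a", "dog", "sat", "in", "the", "car"]])
    print(lm.prob("mat", prev="the", prev_func="the", prev_content="sat"))
```

Note that the model only needs to store the adjacent-word bigram table plus the within-class bigram tables, which is consistent with the reported parameter count of roughly twice that of a plain bigram model; in a recognizer, such a score would replace the bigram score inside time-synchronous Viterbi decoding.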