Using semantic analysis to improve speech recognition performance

  • Authors:
  • Hakan Erdogan, Ruhi Sarikaya, Stanley F. Chen, Yuqing Gao, Michael Picheny

  • Affiliations:
  • Faculty of Engineering and Natural Sciences, Sabanci University, Orhanli Tuzla, 34956 Istanbul, Turkey (H. Erdogan); IBM TJ Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA (R. Sarikaya, S. F. Chen, Y. Gao, M. Picheny)

  • Venue:
  • Computer Speech and Language
  • Year:
  • 2005

Abstract

Although syntactic structure has been used in recent work on language modeling, relatively little effort has gone into using semantic analysis for language models. In this study, we propose three new language modeling techniques that use semantic analysis for spoken dialog systems. We call these methods concept sequence modeling, two-level semantic-lexical modeling, and joint semantic-lexical modeling. These models combine lexical information with varying amounts of semantic information, using annotation supplied by either a shallow semantic parser or a full hierarchical parser. The models also differ in how the lexical and semantic information is combined, ranging from simple interpolation to tight integration using maximum entropy modeling. We obtain improvements in recognition accuracy over word and class N-gram language models in three different task domains. Interpolating the proposed models with class N-gram language models provides additional improvement in the air travel reservation domain. We show that increasing the amount of semantic information used, and tightening the integration between lexical and semantic items, improves performance when interpolating with class language models, indicating that the two types of models become more complementary in nature.
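
To make the "simple interpolation" end of the spectrum described in the abstract concrete, the sketch below linearly interpolates a word bigram model with a class-based bigram model built over semantic labels. This is an illustrative toy example, not the paper's implementation: the corpus, the word-to-class mapping (standing in for output of a shallow semantic parser), and the interpolation weight are all hypothetical.

```python
# Minimal sketch: linear interpolation of a lexical bigram model with a
# class-based bigram model. All data and weights are hypothetical.
from collections import defaultdict

corpus = [
    ["i", "want", "a", "flight", "to", "boston"],
    ["i", "need", "a", "flight", "to", "denver"],
]
# Hypothetical semantic classes, e.g. as a shallow semantic parser might assign.
word_to_class = {"boston": "CITY", "denver": "CITY", "flight": "FLIGHT",
                 "i": "O", "want": "O", "need": "O", "a": "O", "to": "O"}

def bigram_counts(seqs, mapper=lambda w: w):
    """Collect bigram and context counts, optionally mapping words to classes."""
    counts, context = defaultdict(lambda: defaultdict(int)), defaultdict(int)
    for seq in seqs:
        toks = ["<s>"] + [mapper(w) for w in seq]
        for prev, cur in zip(toks, toks[1:]):
            counts[prev][cur] += 1
            context[prev] += 1
    return counts, context

word_counts, word_ctx = bigram_counts(corpus)
class_counts, class_ctx = bigram_counts(corpus, lambda w: word_to_class[w])
class_size = defaultdict(int)
for w, c in word_to_class.items():
    class_size[c] += 1

def p_word_bigram(prev, w):
    # Lexical estimate P(w | prev) from raw bigram counts.
    return word_counts[prev][w] / word_ctx[prev] if word_ctx[prev] else 0.0

def p_class_bigram(prev, w):
    # Class-level estimate: P(c_w | c_prev) * P(w | c_w),
    # with a uniform within-class word distribution.
    cp, cw = word_to_class.get(prev, prev), word_to_class[w]
    if not class_ctx[cp]:
        return 0.0
    return (class_counts[cp][cw] / class_ctx[cp]) / class_size[cw]

def p_interpolated(prev, w, lam=0.6):
    # Linear interpolation of the lexical and class-level estimates.
    return lam * p_word_bigram(prev, w) + (1 - lam) * p_class_bigram(prev, w)

print(p_interpolated("to", "boston"))  # 0.6*0.5 + 0.4*0.125 = 0.35
```

The tighter integration discussed in the abstract (joint semantic-lexical modeling via maximum entropy) would instead combine lexical and semantic features inside a single model rather than mixing two separately trained distributions as done here.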