Robust dialogue-state dependent language modeling using leaving-one-out

  • Authors:
  • F. Wessel; A. Baader

  • Affiliations:
  • Lehrstuhl für Informatik, Technische Hochschule Aachen, Germany

  • Venue:
  • ICASSP '99: Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume 02
  • Year:
  • 1999


Abstract

The use of dialogue-state dependent language models in automatic inquiry systems can improve speech recognition and understanding if a reasonable prediction of the dialogue state is feasible. In this paper, the dialogue state is defined as the set of parameters contained in the system prompt, and a separate language model is constructed for each dialogue state. To obtain robust language models despite the small amount of training data available per state, we propose to interpolate all of the dialogue-state dependent language models linearly for each dialogue state and to train the large number of resulting interpolation weights with the EM algorithm in combination with leaving-one-out. We present experimental results on a small Dutch corpus recorded in the Netherlands with a train timetable information system and show that both the perplexity and the word error rate can be reduced significantly.
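
The abstract's core step is estimating linear interpolation weights over the state-dependent language models with the EM algorithm combined with leaving-one-out. Below is a minimal sketch of that idea, not the authors' implementation: the helper names (loo_unigram_prob, em_weights), the unigram component models, and the add-one smoothing are all assumptions made to keep the example short, standing in for the paper's n-gram models and smoothing. Leaving-one-out is realized by removing each training event from the counts of the model trained on its own dialogue state before scoring it.

```python
import numpy as np
from collections import Counter

def loo_unigram_prob(counts, total, vocab_size, word, leave_one_out):
    """Add-one smoothed unigram probability. With leave_one_out=True the
    current occurrence of `word` is removed from the counts first, so a
    model never scores an event using that event's own count."""
    c = counts[word] - (1 if leave_one_out else 0)
    n = total - (1 if leave_one_out else 0)
    return (c + 1) / (n + vocab_size)

def em_weights(event_probs, n_iter=50):
    """EM re-estimation of linear interpolation weights.

    event_probs: (N, S) array; entry [n, s] is the probability that the
    state-s component model assigns to training event n."""
    n_events, n_states = event_probs.shape
    lam = np.full(n_states, 1.0 / n_states)              # uniform start
    for _ in range(n_iter):
        joint = event_probs * lam                        # lam_s * p_s(w_n)
        post = joint / joint.sum(axis=1, keepdims=True)  # E-step posteriors
        lam = post.mean(axis=0)                          # M-step update
    return lam

# Toy corpus of (dialogue_state, word) events; two states, tiny vocabulary.
events = [(0, "depart"), (0, "from"), (0, "utrecht"),
          (1, "arrive"), (1, "at"), (1, "utrecht"), (0, "from")]
vocab = {w for _, w in events}
states = sorted({s for s, _ in events})
counts = {s: Counter(w for t, w in events if t == s) for s in states}
totals = {s: sum(counts[s].values()) for s in states}

# Weight vector for dialogue state 0: score every state-0 event under all
# component models, leaving the event out of its own state's model.
target = [w for s, w in events if s == 0]
probs = np.array([[loo_unigram_prob(counts[s], totals[s], len(vocab), w,
                                    leave_one_out=(s == 0))
                   for s in states] for w in target])
print(em_weights(probs))  # interpolation weights for state 0
```

In the paper one such weight vector is trained per dialogue state, so the final block above would be repeated for every state. The EM update keeps the weights on the probability simplex at every iteration, since the posteriors in each row sum to one.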