Learning dialogue POMDP models from data

  • Authors:
  • Hamid R. Chinaei; Brahim Chaib-draa

  • Affiliations:
  • Computer Science and Software Engineering Department, Laval University, Quebec, Canada (both authors)

  • Venue:
  • Canadian AI'11: Proceedings of the 24th Canadian Conference on Advances in Artificial Intelligence
  • Year:
  • 2011

Abstract

In this paper, we learn the components of dialogue POMDP models from data. In particular, we learn the states, observations, and the transition and observation functions from unannotated human-human dialogues using a Bayesian latent topic model. Specifically, the latent topic model is used to learn the intentions behind user utterances. As in recent dialogue POMDPs, the discovered user intentions serve as the states of the dialogue POMDP. However, in contrast to previous work, instead of using keywords as POMDP observations, we use meta observations derived from the learned user intentions. Since the number of meta observations is much smaller than the number of actual observations, i.e., the number of words in the dialogue set, POMDP learning and planning become tractable. Experimental results on real dialogues show that the quality of the learned models increases with the number of training dialogues. Moreover, simulation experiments show that the introduced method is robust to the ASR noise level.
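
To make the idea concrete, the following is a minimal sketch of how a latent topic model could be used to recover user intentions from unannotated utterances and to map each utterance to a single "meta observation". It assumes an LDA-style topic model (via the gensim library), a toy corpus, a hypothetical number of intentions K, and a helper `intention_of`; none of these specifics come from the paper, whose exact model and preprocessing may differ.

```python
# Illustrative sketch (not the paper's implementation): infer latent user
# intentions with an LDA topic model and use the dominant topic of each
# utterance as both the POMDP state label and the meta observation.
from gensim import corpora, models

# Toy unannotated user utterances; in practice this would be a
# human-human dialogue corpus.
utterances = [
    "i want to book a flight to montreal tomorrow",
    "please find me a cheap hotel downtown",
    "book a flight from quebec to toronto next week",
    "i need a hotel room for two nights",
]
tokenized = [u.split() for u in utterances]

# Build the vocabulary and bag-of-words corpus.
dictionary = corpora.Dictionary(tokenized)
corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]

K = 2  # assumed number of latent intentions (POMDP states)
lda = models.LdaModel(corpus, num_topics=K, id2word=dictionary,
                      passes=20, random_state=0)

def intention_of(utterance):
    """Return the most probable latent topic of an utterance, treated here
    as the inferred user intention and as the meta observation."""
    bow = dictionary.doc2bow(utterance.split())
    topics = lda.get_document_topics(bow, minimum_probability=0.0)
    return max(topics, key=lambda t: t[1])[0]

for u in utterances:
    print(intention_of(u), "<-", u)
```

Because each utterance is reduced to one of K intention labels rather than to the raw words it contains, the observation space of the resulting POMDP shrinks from the vocabulary size to K, which is what makes learning and planning tractable in the approach described above.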