Partially observable Markov decision processes for spoken dialog systems
Computer Speech and Language
Spoken language interaction with model uncertainty: an adaptive human-robot interaction system
Connection Science - Language and Robots
Point-based value iteration: an anytime algorithm for POMDPs
IJCAI'03 Proceedings of the 18th international joint conference on Artificial intelligence
Using learned PSR model for planning under uncertainty
AI'10 Proceedings of the 23rd Canadian conference on Advances in Artificial Intelligence
Learning observation models for dialogue POMDPs
Canadian AI'12 Proceedings of the 25th Canadian conference on Advances in Artificial Intelligence
In this paper, we learn the components of dialogue POMDP models from data. In particular, we learn the states, observations, and the transition and observation functions from unannotated human-human dialogues using a Bayesian latent topic model. Specifically, we use the Bayesian latent topic model to learn the intentions behind users' utterances. As in recent dialogue POMDPs, the discovered user intentions serve as the states of the dialogue POMDP. In contrast to previous work, however, instead of using keywords as POMDP observations, we use meta-observations derived from the learned user intentions. Since the number of meta-observations is far smaller than the number of raw observations, i.e. the number of words in the dialogue set, POMDP learning and planning become tractable. Experimental results on real dialogues show that the quality of the learned models improves as the number of training dialogues increases. Moreover, simulation experiments show that the introduced method is robust to the ASR noise level.
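The core idea, learning latent user intentions from unannotated utterances with a topic model and using each utterance's dominant topic as a dialogue state, can be sketched as follows. This is a minimal illustration only: the toy corpus, the choice of two topics, and the use of scikit-learn's LDA are assumptions for demonstration, not the paper's actual data or implementation.

```python
# Hedged sketch: discover latent "intentions" behind raw utterances with a
# Bayesian latent topic model (LDA), then treat each utterance's dominant
# topic as its inferred dialogue-POMDP state. The topic posterior itself
# can play the role of a compact meta-observation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny illustrative corpus of unannotated user utterances (assumption:
# two underlying intentions, e.g. music requests vs. weather queries).
utterances = [
    "play some rock music please",
    "play a jazz song for me",
    "what is the weather today",
    "will it rain tomorrow",
    "turn the music volume up",
    "is it sunny outside today",
]

# Bag-of-words representation of the raw utterances (no annotation used).
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(utterances)

# Fit LDA with two latent topics standing in for two user intentions.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-utterance topic posterior

# The argmax topic of each utterance is its inferred state; the full
# posterior row is a low-dimensional meta-observation, far smaller than
# the vocabulary-sized space of word observations.
states = doc_topics.argmax(axis=1)
print(states)
```

The key payoff mirrored here is dimensionality: the meta-observation lives in a space of size `n_components` (here 2) rather than the vocabulary size, which is what makes downstream POMDP learning and planning tractable.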