Open-ended spoken interactions are typically characterised by both structural complexity and high levels of uncertainty, making dialogue management in such settings a particularly challenging problem. Traditional approaches have focused on providing theoretical accounts for either the uncertainty or the complexity of spoken dialogue, but have rarely considered the two issues simultaneously. This paper describes ongoing work on a new approach to dialogue management that attempts to fill this gap. We represent the interaction as a Partially Observable Markov Decision Process (POMDP) over a rich state space incorporating dialogue, user, and environment models. The tractability of the resulting POMDP can be preserved using a mechanism for dynamically constraining the action space based on prior knowledge over locally relevant dialogue structures. These constraints are encoded in a small set of general rules expressed as a Markov Logic network. The first-order expressivity of Markov Logic enables us to leverage the rich relational structure of the problem and efficiently abstract over large regions of the state and action spaces.
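The core mechanism the abstract describes, pruning the POMDP action space with rule-based constraints before action selection, can be illustrated with a minimal sketch. This is not the authors' implementation: the state predicates, rule format, and action names below are hypothetical, and the first-order rules are stood in for by simple Python predicates rather than a full Markov Logic network.

```python
# Hypothetical sketch: dynamically constraining a dialogue POMDP's
# action space with rules over the current (abstracted) state.
# Each rule pairs a precondition on the state with the set of
# actions it licenses; only licensed actions are considered.

def admissible_actions(state, actions, rules):
    """Return the subset of `actions` licensed by at least one firing rule."""
    allowed = set()
    for precondition, licensed in rules:
        if precondition(state):
            allowed |= licensed
    return [a for a in actions if a in allowed]

# Toy dialogue state: the set of ground predicates that currently hold.
state = {"user_asked(weather)", "slot_unfilled(location)"}

# Rules standing in for the Markov Logic constraints (illustrative only).
rules = [
    # If a required slot is unfilled, only a clarification request is relevant.
    (lambda s: "slot_unfilled(location)" in s,
     {"ask(location)"}),
    # Once the slot is filled, answering the user's query becomes relevant.
    (lambda s: "user_asked(weather)" in s and "slot_unfilled(location)" not in s,
     {"inform(weather)"}),
]

actions = ["ask(location)", "inform(weather)", "greet()", "close()"]
print(admissible_actions(state, actions, rules))  # ['ask(location)']
```

The point of the design is that planning then runs over a handful of locally relevant actions instead of the full action set, which is what keeps the POMDP tractable as the domain grows.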