Towards relational POMDPs for adaptive dialogue management

  • Authors: Pierre Lison
  • Affiliations: German Research Centre for Artificial Intelligence (DFKI GmbH), Saarbrücken, Germany
  • Venue: ACLstudent '10, Proceedings of the ACL 2010 Student Research Workshop
  • Year: 2010

Abstract

Open-ended spoken interactions are typically characterised by both structural complexity and high levels of uncertainty, making dialogue management in such settings a particularly challenging problem. Traditional approaches have focused on providing theoretical accounts for either the uncertainty or the complexity of spoken dialogue, but have rarely considered the two issues simultaneously. This paper describes ongoing work on a new approach to dialogue management which attempts to fill this gap. We represent the interaction as a Partially Observable Markov Decision Process (POMDP) over a rich state space incorporating dialogue, user, and environment models. The tractability of the resulting POMDP can be preserved using a mechanism for dynamically constraining the action space based on prior knowledge over locally relevant dialogue structures. These constraints are encoded in a small set of general rules expressed as a Markov Logic network. The first-order expressivity of Markov Logic enables us to leverage the rich relational structure of the problem and to efficiently abstract over large regions of the state and action spaces.
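To make the core idea concrete, the following is a minimal, hypothetical sketch of dynamically constraining a POMDP action space with rules over the current dialogue state. All names (the action labels, the `RELEVANCE_RULES` table, the simplified boolean belief predicates) are illustrative assumptions, not the paper's actual implementation; in particular, the paper expresses such rules as a weighted Markov Logic network rather than the hard, deterministic rules used here.

```python
# Illustrative sketch: pruning a dialogue action space with relevance
# rules before action selection. All identifiers are hypothetical.

ACTIONS = ["AskRepeat", "ConfirmGoal", "ExecuteMove", "Greet"]

# Each rule licenses a subset of actions when its dialogue-state
# predicate holds. The paper encodes analogous (soft, first-order)
# constraints in Markov Logic; here they are hard rules for clarity.
RELEVANCE_RULES = {
    "low_confidence":   {"AskRepeat", "ConfirmGoal"},
    "goal_confirmed":   {"ExecuteMove"},
    "dialogue_opening": {"Greet"},
}

def relevant_actions(belief):
    """Return the union of actions licensed by all rules whose
    predicate holds in the (simplified, boolean) belief state."""
    licensed = set()
    for predicate, actions in RELEVANCE_RULES.items():
        if belief.get(predicate, False):
            licensed |= actions
    # If no rule fires, fall back to the full action set.
    return licensed or set(ACTIONS)

# With low recognition confidence, only clarification moves remain,
# so the planner deliberates over 2 actions instead of 4.
belief = {"low_confidence": True}
print(sorted(relevant_actions(belief)))  # prints ['AskRepeat', 'ConfirmGoal']
```

The design point is that pruning happens before planning: shrinking the action set at each turn keeps POMDP inference tractable even when the underlying state space is large and relational.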