Gaussian processes for fast policy optimisation of POMDP-based dialogue managers

  • Authors:
  • M. Gašić, F. Jurčíček, S. Keizer, F. Mairesse, B. Thomson, K. Yu, S. Young

  • Affiliations:
  • Cambridge University, Cambridge, UK (all authors)

  • Venue:
  • SIGDIAL '10 Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue
  • Year:
  • 2010

Abstract

Modelling dialogue as a Partially Observable Markov Decision Process (POMDP) enables a dialogue policy that is robust to speech understanding errors to be learnt. However, a major challenge in POMDP policy learning is maintaining tractability, so the use of approximation is inevitable. We propose applying Gaussian processes to reinforcement learning of optimal POMDP dialogue policies, in order (1) to make the learning process faster and (2) to obtain an estimate of the uncertainty of the approximation. We first demonstrate the idea on a simple voice mail dialogue task and then apply the method to a real-world tourist information dialogue task.
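The core idea the abstract describes is using a Gaussian process to approximate the value of acting from a belief state, which yields both a value estimate and an uncertainty for it. The sketch below is an illustrative toy only, not the authors' algorithm (they use a GP within temporal-difference learning): it fits a plain GP regression to a handful of invented (belief, return) pairs for one action in a voice-mail-style task, where the belief is the probability the user wants to save the message. All numbers and names here are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.3, variance=1.0):
    """Squared-exponential kernel between two 1-D arrays of points."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """GP regression posterior mean and variance at the test points."""
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)          # K^{-1} y
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Toy voice-mail-style data: belief b = P(user wants "save"),
# with invented observed returns for the "confirm-save" action.
b_train = np.array([0.1, 0.4, 0.6, 0.9])
q_train = np.array([-1.0, 0.2, 0.8, 1.5])

b_test = np.linspace(0.0, 1.0, 5)
q_mean, q_var = gp_posterior(b_train, q_train, b_test)
for b, m, v in zip(b_test, q_mean, q_var):
    print(f"b={b:.2f}  Q~{m:+.2f}  std={np.sqrt(max(v, 0.0)):.2f}")
```

The printed standard deviations are the point of the exercise: they shrink near the sampled beliefs and grow where data is sparse, which is exactly the uncertainty signal the abstract says a GP approximation provides (and which can be used to guide exploration during policy learning).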