Combining hierarchical reinforcement learning and Bayesian networks for natural language generation in situated dialogue

  • Authors:
  • Nina Dethlefs; Heriberto Cuayáhuitl

  • Affiliations:
  • University of Bremen; German Research Centre for Artificial Intelligence (DFKI), Saarbrücken

  • Venue:
  • ENLG '11 Proceedings of the 13th European Workshop on Natural Language Generation
  • Year:
  • 2011


Abstract

Language generators in situated domains face a number of content selection, utterance planning and surface realisation decisions, which can be strictly interdependent. We therefore propose to optimise these processes jointly using Hierarchical Reinforcement Learning. To this end, we induce a reward function for content selection and utterance planning from data using the PARADISE framework, and propose a novel method for inducing a reward function for surface realisation from corpora, based on generation spaces represented as Bayesian Networks. Results in terms of task success and human-likeness suggest that our unified approach outperforms a baseline optimised in isolation as well as greedy and random baselines, and receives human ratings close to those of human authors.
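The core idea of the surface-realisation reward can be illustrated with a toy sketch: a Bayesian network over surface-form variables scores each candidate realisation by its (log-)likelihood, and a reinforcement learner uses that score as its reward. Everything below is a hypothetical illustration, not the paper's actual model: the variables (`CONTEXT`, word order, lexeme), the probability tables, and the flat one-step learner are invented for clarity; the paper uses a hierarchical learner and corpus-induced networks.

```python
import math
import random

# Toy "generation space" as a Bayesian network over two surface-form
# variables: word order depends on the context, lexical choice depends
# on word order. All probabilities are illustrative, not corpus-derived.
P_order = {  # P(order | context)
    "route":  {"landmark_first": 0.7, "direction_first": 0.3},
    "action": {"landmark_first": 0.2, "direction_first": 0.8},
}
P_lexeme = {  # P(lexeme | order)
    "landmark_first":  {"turn": 0.4, "go": 0.6},
    "direction_first": {"turn": 0.8, "go": 0.2},
}

def bn_reward(context, order, lexeme):
    """Reward = log-likelihood of the realisation under the network,
    so more human-like (higher-probability) choices score higher."""
    return math.log(P_order[context][order]) + math.log(P_lexeme[order][lexeme])

# Tabular one-step (bandit-style) learner over joint (order, lexeme) actions.
actions = [(o, l) for o in P_lexeme for l in ("turn", "go")]
Q = {(c, a): 0.0 for c in P_order for a in actions}
alpha, epsilon = 0.1, 0.2
random.seed(0)
for _ in range(5000):
    c = random.choice(list(P_order))
    a = (random.choice(actions) if random.random() < epsilon
         else max(actions, key=lambda a: Q[(c, a)]))
    r = bn_reward(c, *a)
    Q[(c, a)] += alpha * (r - Q[(c, a)])  # move Q toward the observed reward

# The learned policy picks the most likely realisation per context.
best = {c: max(actions, key=lambda a: Q[(c, a)]) for c in P_order}
print(best)
```

In this sketch the learner recovers the realisations the network deems most probable for each context, which mirrors the intuition that a likelihood-based reward pushes the generator toward human-like surface forms.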