Hierarchical reinforcement learning for adaptive text generation

  • Authors:
  • Nina Dethlefs; Heriberto Cuayáhuitl

  • Affiliations:
  • University of Bremen, Germany (both authors)

  • Venue:
  • INLG '10 Proceedings of the 6th International Natural Language Generation Conference
  • Year:
  • 2010

Abstract

We present a novel approach to natural language generation (NLG) that applies hierarchical reinforcement learning to text generation in the wayfinding domain. Our approach aims to optimise the integration of NLG tasks that are inherently different in nature, such as decisions of content selection, text structure, user modelling, referring expression generation (REG), and surface realisation. It also aims to capture the interdependencies between these areas. We apply hierarchical reinforcement learning to learn a generation policy that captures these interdependencies and that can be transferred to other NLG tasks. Our experimental results---in a simulated environment---show that the learnt wayfinding policy outperforms a baseline policy that takes reasonable but unoptimised actions.
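To make the idea concrete, the following is a minimal, hypothetical sketch of two-level hierarchical Q-learning on a toy wayfinding-style generation problem: a root task chooses between NLG subtasks (here, content selection and referring expression generation), and each subtask chooses a primitive generation action. The subtask names, actions, and reward values are illustrative assumptions, not the authors' state space or reward function.

```python
import random

random.seed(0)

# Hypothetical task hierarchy: root picks a subtask, subtask picks an action.
SUBTASKS = {
    "content_selection": ["route_only", "route_with_landmark"],
    "referring_expression": ["pronoun", "full_np"],
}

# Illustrative reward: assume landmark-based instructions and full noun
# phrases are easiest for a simulated user to follow.
REWARD = {"route_only": 0.2, "route_with_landmark": 1.0,
          "pronoun": 0.3, "full_np": 0.8}

def train(episodes=2000, alpha=0.1, epsilon=0.1):
    # One Q-value per subtask at the root, one per action within each subtask.
    q_root = {s: 0.0 for s in SUBTASKS}
    q_sub = {s: {a: 0.0 for a in acts} for s, acts in SUBTASKS.items()}
    for _ in range(episodes):
        # Epsilon-greedy choice at the root level.
        if random.random() < epsilon:
            sub = random.choice(list(SUBTASKS))
        else:
            sub = max(q_root, key=q_root.get)
        # Epsilon-greedy choice within the chosen subtask.
        if random.random() < epsilon:
            act = random.choice(SUBTASKS[sub])
        else:
            act = max(q_sub[sub], key=q_sub[sub].get)
        r = REWARD[act]
        # Child update, then root update with the subtask's achieved reward
        # (episodes are one-step, so no bootstrapping term is needed).
        q_sub[sub][act] += alpha * (r - q_sub[sub][act])
        q_root[sub] += alpha * (r - q_root[sub])
    return q_root, q_sub

q_root, q_sub = train()
best_sub = max(q_root, key=q_root.get)
best_act = max(q_sub[best_sub], key=q_sub[best_sub].get)
print(best_sub, best_act)
```

Under these assumed rewards, the learnt hierarchy prefers the content-selection subtask and landmark-based instructions; the point is that values propagate upward, so root-level decisions reflect what the subtask policies can actually achieve, which is one way interdependencies between NLG tasks can be captured.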