Reward shaping for statistical optimisation of dialogue management

  • Authors:
  • Layla El Asri, Romain Laroche, Olivier Pietquin

  • Affiliations:
  • Layla El Asri: Orange Labs, Issy-les-Moulineaux, France and IMS-MaLIS Research Group, UMI 2958 (CNRS - GeorgiaTech), SUPELEC Metz Campus, Metz, France
  • Romain Laroche: Orange Labs, Issy-les-Moulineaux, France
  • Olivier Pietquin: IMS-MaLIS Research Group, UMI 2958 (CNRS - GeorgiaTech), SUPELEC Metz Campus, Metz, France

  • Venue:
  • SLSP 2013: Proceedings of the First International Conference on Statistical Language and Speech Processing
  • Year:
  • 2013


Abstract

This paper investigates the impact of reward shaping on the learning of a reinforcement-learning-based spoken dialogue system. A diffuse reward function gives a reward after each transition between two dialogue states, whereas a sparse function gives a reward only at the end of the dialogue. Reward shaping consists of learning a diffuse reward function that preserves the optimal policy induced by the sparse one. Two reward shaping methods are applied to a corpus of dialogues evaluated with numerical performance scores. Learning with the resulting functions is compared to learning with the sparse reward, and it is shown, on simulated dialogues, that the policies learnt after reward shaping achieve higher performance.
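
The abstract contrasts diffuse and sparse rewards without detailing the two shaping methods studied. As a rough, non-authoritative illustration of the underlying idea, the Python sketch below uses the standard potential-based formulation F(s, s') = gamma * phi(s') - phi(s), which is known to leave the optimal policy unchanged; the slot-filling potential phi, the discount GAMMA, the toy trajectory, and the final score of 1.0 are illustrative assumptions, not the paper's own methods.

    # Minimal sketch of potential-based reward shaping: turning a sparse
    # end-of-dialogue reward into a diffuse per-transition reward without
    # changing the optimal policy. All names and values here are assumed
    # for illustration; they are not taken from the paper.

    GAMMA = 0.95  # discount factor (assumed value)

    def phi(state):
        """Illustrative potential: number of dialogue slots filled so far."""
        return float(len(state["filled_slots"]))

    def shaped_reward(state, next_state, sparse_reward):
        """Diffuse reward = sparse reward + shaping term
        F(s, s') = GAMMA * phi(s') - phi(s)."""
        return sparse_reward + GAMMA * phi(next_state) - phi(state)

    # Toy dialogue: slots are filled turn by turn; only the final transition
    # carries the sparse performance score (assumed to be 1.0 here).
    trajectory = [
        {"filled_slots": set()},
        {"filled_slots": {"origin"}},
        {"filled_slots": {"origin", "destination"}},
    ]
    sparse_rewards = [0.0, 1.0]  # reward only at the end of the dialogue

    for (s, s_next), r in zip(zip(trajectory, trajectory[1:]), sparse_rewards):
        print(shaped_reward(s, s_next, r))  # per-transition diffuse rewards

Because the shaping terms telescope along any trajectory, returns are shifted only by the potential of the endpoints, so the ranking of policies, and hence the optimal policy, is preserved while the learner receives feedback at every turn.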