Reinforcement learning of question-answering dialogue policies for virtual museum guides

  • Authors:
  • Teruhisa Misu, Kallirroi Georgila, Anton Leuski, David Traum

  • Affiliations:
  • National Institute of Information and Communications Technology (NICT), Kyoto, Japan (Misu); USC Institute for Creative Technologies, Playa Vista, CA (Georgila, Leuski, Traum)

  • Venue:
  • SIGDIAL '12 Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue
  • Year:
  • 2012

Abstract

We use Reinforcement Learning (RL) to learn question-answering dialogue policies for a real-world application. We analyze a corpus of interactions of museum visitors with two virtual characters that serve as guides at the Museum of Science in Boston, in order to build a realistic model of user behavior when interacting with these characters. A simulated user is built based on this model and used for learning the dialogue policy of the virtual characters using RL. Our learned policy outperforms two baselines (including the original dialogue policy that was used for collecting the corpus) in a simulation setting.
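The learning setup the abstract describes (a user model estimated from corpus interactions, a simulated user driven by that model, and a guide policy trained against it with RL) can be sketched in miniature as follows. This is an illustrative assumption, not the paper's actual model: the states, actions, reward values, and user-behavior probabilities are invented for the sketch, and a one-step tabular update stands in for the paper's RL algorithm.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical dialogue states and guide actions (not from the paper).
STATES = ["clear_question", "ambiguous_question"]
ACTIONS = ["answer", "clarify"]

def simulated_user():
    """Simulated visitor: assumed corpus statistic that 30% of questions
    arrive ambiguous (e.g. out of context or underspecified)."""
    return "ambiguous_question" if random.random() < 0.3 else "clear_question"

def reward(state, action):
    """Assumed reward scheme: answering a clear question succeeds;
    answering an ambiguous one risks an off-topic response, while a
    clarification sub-dialogue recovers most of the value."""
    if state == "clear_question":
        return 1.0 if action == "answer" else -0.2
    return 0.5 if action == "clarify" else -1.0

def learn_policy(episodes=5000, alpha=0.1, epsilon=0.1):
    """Epsilon-greedy tabular learning of the guide's response policy
    against the simulated user (one-step, bandit-style update)."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = simulated_user()
        if random.random() < epsilon:
            a = random.choice(ACTIONS)            # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])  # exploit
        Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])
    # Greedy policy: best action per state.
    return {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in STATES}

if __name__ == "__main__":
    print(learn_policy())
```

Under this toy reward scheme the learned policy answers clear questions directly and asks for clarification on ambiguous ones; evaluating such a learned policy against baselines in simulation mirrors the comparison reported in the abstract.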