Learning to control listening-oriented dialogue using partially observable Markov decision processes

  • Authors:
  • Toyomi Meguro, Yasuhiro Minami, Ryuichiro Higashinaka, Kohji Dohsaka

  • Affiliations:
  • NTT Corporation, Tokyo, Japan (all authors)

  • Venue:
  • ACM Transactions on Speech and Language Processing (TSLP)
  • Year:
  • 2014

Abstract

Our aim is to build listening agents that attentively listen to their users and satisfy their desire to speak and be heard. This article investigates how to automatically create the dialogue control component of such a listening agent. We collected a large number of listening-oriented dialogues with their user satisfaction ratings and used them to create a dialogue control component that satisfies users by means of Partially Observable Markov Decision Processes (POMDPs). Using a hybrid dialogue controller, in which high-level dialogue acts are chosen by a statistical policy and low-level slot values are populated by a wizard, we evaluated our dialogue control method in a Wizard-of-Oz experiment. The experimental results show that our POMDP-based method achieves significantly higher user satisfaction than other stochastic models, confirming the validity of our approach. This article is the first to verify, with human users, the usefulness of POMDP-based dialogue control for improving user satisfaction in non-task-oriented dialogue systems.
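To illustrate the general idea behind POMDP-based dialogue control, the sketch below shows a toy belief update and a one-step-lookahead action choice. This is a minimal hypothetical example, not the paper's actual model: the states (user's desire to speak), listener dialogue acts, observations, and all probability and reward values are invented for illustration, and a real system would learn these from the dialogue corpus and compute a policy by POMDP planning rather than greedy lookahead.

```python
# Toy POMDP sketch for listening-oriented dialogue control.
# All states, acts, observations, and numbers are hypothetical.

STATES = ("wants_to_talk", "satisfied")
ACTIONS = ("back_channel", "ask_question")

# P(s' | s, a): state transition probabilities (invented values)
T = {
    ("wants_to_talk", "back_channel"): {"wants_to_talk": 0.6, "satisfied": 0.4},
    ("wants_to_talk", "ask_question"): {"wants_to_talk": 0.8, "satisfied": 0.2},
    ("satisfied", "back_channel"):     {"wants_to_talk": 0.1, "satisfied": 0.9},
    ("satisfied", "ask_question"):     {"wants_to_talk": 0.3, "satisfied": 0.7},
}

# P(o | s'): observation probabilities (invented values)
Z = {
    "wants_to_talk": {"long_utterance": 0.7, "short_utterance": 0.3},
    "satisfied":     {"long_utterance": 0.2, "short_utterance": 0.8},
}

# R(s, a): reward attentive listening while the user wants to talk (invented)
R = {
    ("wants_to_talk", "back_channel"): 1.0,
    ("wants_to_talk", "ask_question"): 0.5,
    ("satisfied", "back_channel"):     0.2,
    ("satisfied", "ask_question"):     0.4,
}

def belief_update(belief, action, obs):
    """Bayes filter over hidden user states:
    b'(s') ∝ Z(o | s') * sum_s T(s' | s, a) * b(s)."""
    new_b = {
        s2: Z[s2][obs] * sum(T[(s, action)][s2] * belief[s] for s in STATES)
        for s2 in STATES
    }
    norm = sum(new_b.values())
    return {s: p / norm for s, p in new_b.items()}

def greedy_action(belief):
    """One-step lookahead on expected immediate reward
    (a full POMDP solver would optimize long-term return)."""
    return max(ACTIONS, key=lambda a: sum(belief[s] * R[(s, a)] for s in STATES))

# One turn of the control loop: act, observe the user, revise the belief.
b = {"wants_to_talk": 0.5, "satisfied": 0.5}
a = greedy_action(b)
b = belief_update(b, a, "long_utterance")
```

A long user utterance shifts probability mass toward `wants_to_talk`, so on the next turn the controller keeps favoring listening acts such as back-channels; this belief-tracking under uncertainty about the user's internal state is what distinguishes the POMDP formulation from a fully observed Markov decision process.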