Evaluating discourse understanding in spoken dialogue systems

  • Authors:
  • Ryuichiro Higashinaka, Noboru Miyazaki, Mikio Nakano, Kiyoaki Aikawa

  • Affiliations:
  • NTT Communication Science Laboratories, NTT Corporation, Tokyo, Japan (all authors)

  • Venue:
  • ACM Transactions on Speech and Language Processing (TSLP)
  • Year:
  • 2004

Abstract

This article describes a method for creating an evaluation measure for discourse understanding in spoken dialogue systems. Because no well-established measure has yet been proposed for evaluating discourse understanding, it has so far been assessed only indirectly, through the system's total performance. Such evaluations, however, are greatly influenced by the task domain and dialogue strategy. To find a measure that allows system performance to be estimated well from discourse-understanding results alone, we enumerated candidate discourse-understanding metrics and, through dialogue experiments, calculated their correlation with the system's total performance.
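
The core of the method is a correlation analysis between per-dialogue metric values and a total-performance measure. The sketch below is a minimal illustration of that idea, not code from the paper: the metric names, the performance measure, and all numbers are hypothetical, and it simply ranks candidate metrics by their Pearson and Spearman correlation with total performance using SciPy.

```python
# Minimal sketch (hypothetical data): rank candidate discourse-understanding
# metrics by how well they correlate with the system's total performance.
from scipy.stats import pearsonr, spearmanr

# Per-dialogue scores for two hypothetical discourse-understanding metrics
# and a hypothetical total-performance measure (e.g., task completion)
# collected over the same set of dialogues.
candidate_metrics = {
    "slot_accuracy":          [0.62, 0.71, 0.80, 0.55, 0.90, 0.68],
    "state_update_precision": [0.58, 0.75, 0.77, 0.60, 0.88, 0.64],
}
total_performance = [0.50, 0.65, 0.78, 0.48, 0.92, 0.60]

for name, values in candidate_metrics.items():
    r, p_r = pearsonr(values, total_performance)      # linear correlation
    rho, p_rho = spearmanr(values, total_performance)  # rank correlation
    print(f"{name}: Pearson r={r:.2f} (p={p_r:.3f}), "
          f"Spearman rho={rho:.2f} (p={p_rho:.3f})")
```

A metric with consistently high correlation across dialogues would, under this scheme, serve as a proxy for total performance that can be computed from discourse-understanding results alone.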