Speech dialogue with facial displays: multimodal human-computer conversation

  • Authors:
  • Katashi Nagao; Akikazu Takeuchi

  • Affiliations:
  • Sony Computer Science Laboratory Inc., Higashi-gotanda, Shinagawa-ku, Tokyo, Japan (both authors)

  • Venue:
  • ACL '94: Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics
  • Year:
  • 1994

Abstract

Human face-to-face conversation is an ideal model for human-computer dialogue. One of the major features of face-to-face communication is its multiplicity of communication channels that act on multiple modalities. To realize a natural multimodal dialogue, it is necessary to study how humans perceive information and to determine the information to which humans are sensitive. A face is an independent communication channel that conveys emotional and conversational signals, encoded as facial expressions. We have developed an experimental system that integrates speech dialogue and facial animation to investigate the effect of introducing communicative facial expressions as a new modality in human-computer conversation. Our experiments have shown that facial expressions are helpful, especially upon first contact with the system. We have also discovered that featuring facial expressions at an early stage improves subsequent interaction.