Embodiment in conversational interfaces: Rea

  • Authors:
  • J. Cassell; T. Bickmore; M. Billinghurst; L. Campbell; K. Chang; H. Vilhjálmsson; H. Yan

  • Affiliations:
  • Gesture and Narrative Language Group, MIT Media Laboratory, E15-315, 20 Ames St, Cambridge, Massachusetts (all authors)

  • Venue:
  • Proceedings of the SIGCHI conference on Human Factors in Computing Systems
  • Year:
  • 1999

Abstract

In this paper, we argue for embodied conversational characters as the logical extension of the metaphor of human-computer interaction as a conversation. We argue that the only way to fully model the richness of human face-to-face communication is to rely on conversational analysis that describes sets of conversational behaviors as fulfilling conversational functions, both interactional and propositional. We demonstrate how to implement this approach in Rea, an embodied conversational agent that is capable of both multimodal input understanding and output generation in a limited application domain. Rea supports both social and task-oriented dialogue. We discuss issues that need to be addressed in creating embodied conversational agents, and describe the architecture of the Rea interface.