Understanding RUTH: creating believable behaviors for a virtual human under uncertainty

  • Authors:
  • Insuk Oh; Matthew Stone

  • Affiliations:
  • Department of Computer Science, Rutgers, The State University of New Jersey, Piscataway, NJ (both authors)

  • Venue:
  • ICDHM '07: Proceedings of the 1st International Conference on Digital Human Modeling
  • Year:
  • 2007


Abstract

In pursuing the ultimate goal of enabling intelligent conversation with a virtual human, two key challenges are selecting which nonverbal behaviors to implement and realizing those behaviors practically and reliably. In this paper, we explore the signals interlocutors use to display uncertainty face to face. People's signals were identified and annotated through systematic coding and then implemented on our ECA (Embodied Conversational Agent), RUTH. We investigated whether RUTH animations were as effective as videos of talking people in conveying an agent's level of uncertainty to human viewers. Our results show that people could pick up on different levels of uncertainty not only from another conversational partner but also from the RUTH simulations. In addition, we used animations containing different subsets of facial signals to understand in more detail how nonverbal behavior conveys uncertainty. The findings illustrate the promise of our methodology for creating specific inventories of fine-grained conversational behaviors from knowledge and observations of spontaneous human conversation.
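
The pipeline the abstract describes, coding observed uncertainty cues and then composing different subsets of them into animation conditions, can be illustrated with a minimal sketch. The cue names, timing values, and schedule format below are hypothetical placeholders for illustration only; they are not RUTH's actual input format or the paper's coding scheme.

    from dataclasses import dataclass
    from typing import List, Set, Tuple

    @dataclass
    class FacialSignal:
        """One annotated nonverbal cue (name and timings are hypothetical)."""
        name: str        # e.g. "brow_raise", "gaze_aversion"
        onset: float     # seconds from utterance start
        duration: float  # seconds

    # A hypothetical inventory of cues coded from video of an uncertain speaker.
    UNCERTAINTY_CUES = [
        FacialSignal("brow_raise", onset=0.2, duration=0.8),
        FacialSignal("gaze_aversion", onset=0.5, duration=1.0),
        FacialSignal("lip_press", onset=1.2, duration=0.6),
    ]

    def build_schedule(cues: List[FacialSignal],
                       include: Set[str]) -> List[Tuple[float, float, str]]:
        """Compose an animation schedule from a chosen subset of cues,
        mirroring the study's comparison of animations that contain
        different subsets of facial signals."""
        selected = [c for c in cues if c.name in include]
        return sorted((c.onset, c.onset + c.duration, c.name) for c in selected)

    # Full-cue condition vs. a reduced condition with only the brow raise.
    full = build_schedule(UNCERTAINTY_CUES, {c.name for c in UNCERTAINTY_CUES})
    reduced = build_schedule(UNCERTAINTY_CUES, {"brow_raise"})
    print(full)
    print(reduced)

Under these assumptions, each experimental condition is simply a different `include` set over the same coded inventory, which is what allows the contribution of individual facial signals to perceived uncertainty to be isolated.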