A multimodal real-time platform for studying human-avatar interactions

  • Authors:
  • Hui Zhang; Damian Fricker; Chen Yu

  • Affiliations:
  • Indiana University, Bloomington; Indiana University, Bloomington; Indiana University, Bloomington

  • Venue:
  • IVA '10: Proceedings of the 10th International Conference on Intelligent Virtual Agents
  • Year:
  • 2010

Abstract

A better understanding of human users' expectations of, and sensitivities to, the real-time behavior generated by virtual agents can yield insightful empirical data and suggest useful principles to guide the design of intelligent virtual agents. In light of this, we propose and implement a research framework to systematically study and evaluate important aspects of multimodal real-time interactions between humans and virtual agents. Our platform allows the virtual agent to track the user's gaze and hand movements in real time and adjust its own behaviors accordingly. Multimodal data streams, including speech, eye gaze, and hand and head movements from both the human user and the virtual agent, are collected during human-avatar interactions and then used to discover fine-grained behavioral patterns in human-agent interactions. We present a pilot study based on the proposed framework as an example of the kinds of research questions that can be rigorously addressed and answered. This first study, investigating human-agent joint attention, reveals promising results about the role and function of joint attention in human-avatar interactions.
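
The abstract does not describe the platform's implementation. The sketch below is only a minimal, hypothetical illustration (in Python) of the kind of real-time loop such a system might use: a simulated gaze stream stands in for an eye tracker, the avatar follows the user's attended object (a responsive-gaze condition), and both modalities are logged with timestamps for later joint-attention analysis. All names (OBJECTS, read_user_gaze, run_session, etc.) are assumptions for illustration, not the authors' API.

```python
import random
import time
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical object locations on a shared tabletop (normalized coordinates).
OBJECTS = {"red_block": (0.2, 0.5), "blue_block": (0.5, 0.5), "green_block": (0.8, 0.5)}


@dataclass
class Sample:
    """One timestamped reading from a single modality."""
    t: float
    modality: str                 # e.g. "gaze" or "avatar_gaze"
    value: Tuple[float, float]


@dataclass
class SessionLog:
    """Synchronized multimodal record of one human-avatar session."""
    samples: List[Sample] = field(default_factory=list)

    def add(self, modality: str, value: Tuple[float, float]) -> None:
        self.samples.append(Sample(time.time(), modality, value))


def read_user_gaze() -> Tuple[float, float]:
    """Stand-in for an eye-tracker call; returns a gaze point near one object."""
    x, y = random.choice(list(OBJECTS.values()))
    return (x + random.uniform(-0.05, 0.05), y + random.uniform(-0.05, 0.05))


def nearest_object(point: Tuple[float, float]) -> str:
    """Map a 2-D point to the closest known object."""
    return min(OBJECTS, key=lambda k: (OBJECTS[k][0] - point[0]) ** 2
                                      + (OBJECTS[k][1] - point[1]) ** 2)


def run_session(duration_s: float = 1.0, hz: float = 30.0) -> SessionLog:
    """Poll the (simulated) gaze stream, let the avatar follow the user's
    attention, and log both streams for later analysis."""
    log = SessionLog()
    end = time.time() + duration_s
    while time.time() < end:
        gaze = read_user_gaze()
        log.add("gaze", gaze)

        # Avatar policy: look at whatever object the user currently fixates.
        target = nearest_object(gaze)
        log.add("avatar_gaze", OBJECTS[target])

        time.sleep(1.0 / hz)
    return log


if __name__ == "__main__":
    session = run_session()
    user = [s for s in session.samples if s.modality == "gaze"]
    avatar = [s for s in session.samples if s.modality == "avatar_gaze"]
    shared = sum(1 for g, a in zip(user, avatar)
                 if nearest_object(g.value) == nearest_object(a.value))
    print(f"frames with shared attention: {shared} / {len(user)}")
```

In an actual platform of this kind, the simulated sensor would be replaced by real eye-tracking and motion-capture streams, and the avatar policy would be one of several experimentally manipulated conditions; the per-frame log is what makes fine-grained behavioral pattern analysis possible afterward.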