Adaptive eye gaze patterns in interactions with human and artificial agents
ACM Transactions on Interactive Intelligent Systems (TiiS)
Multimodal interaction in everyday life seems effortless. A closer look, however, reveals that such interaction is in fact complex, comprising multiple levels of coordination, from high-level linguistic exchanges to low-level couplings of momentary bodily movements, both within an agent and across multiple interacting agents. A better understanding of how these multimodal behaviors are coordinated can yield principles to guide the development of intelligent multimodal interfaces. In light of this, we propose and implement a research framework in which human participants interact with a virtual agent in a virtual environment. Our platform allows the virtual agent to track the user's gaze and hand movements in real time and to adjust its own behaviors accordingly. We designed and conducted an experiment to investigate adaptive user behaviors in a human-agent joint attention task. Multimodal data streams were collected, including speech, eye gaze, and hand and head movements from both the human user and the virtual agent, and then analyzed to discover behavioral patterns. These patterns show that human participants are highly sensitive to the momentary multimodal behaviors generated by the virtual agent and rapidly adapt their own behaviors in response. Our results underscore the importance of studying and understanding real-time adaptive behaviors in human-computer multimodal interaction.
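To make the real-time coupling described in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of an agent control loop for a joint attention task: the virtual agent watches a stream of user gaze samples and shifts its own gaze to whatever object the user has fixated for longer than a short threshold. All names (`GazeSample`, `VirtualAgent`, the 300 ms threshold) are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GazeSample:
    """One sample from a (hypothetical) eye tracker."""
    timestamp: float        # seconds since session start
    target: Optional[str]   # object the user is fixating, or None


class VirtualAgent:
    """Follows the user's gaze to establish joint attention.

    The agent shifts its own gaze only after the user has fixated the
    same object for `fixation_threshold` seconds, a simple stand-in for
    the momentary behavioral coupling the study measures.
    """

    def __init__(self, fixation_threshold: float = 0.3):
        self.fixation_threshold = fixation_threshold
        self.gaze_target: Optional[str] = None   # agent's current gaze
        self._candidate: Optional[str] = None    # user's current fixation
        self._since: float = 0.0                 # when that fixation began

    def update(self, sample: GazeSample) -> Optional[str]:
        if sample.target != self._candidate:
            # User looked somewhere new: restart the fixation timer.
            self._candidate = sample.target
            self._since = sample.timestamp
        elif (sample.target is not None
              and sample.timestamp - self._since >= self.fixation_threshold):
            # Sustained fixation: follow it, establishing joint attention.
            self.gaze_target = sample.target
        return self.gaze_target


agent = VirtualAgent()
stream = [GazeSample(0.0, "cup"), GazeSample(0.2, "cup"),
          GazeSample(0.4, "cup"), GazeSample(0.5, "block")]
for s in stream:
    agent.update(s)
print(agent.gaze_target)  # the agent now attends to "cup"
```

In a full platform of the kind the abstract describes, the same loop would run symmetrically: the user's adaptations to the agent's gaze shifts are what the collected multimodal data streams are analyzed for.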