Inverse kinematics positioning using nonlinear programming for highly articulated figures. ACM Transactions on Graphics (TOG).
SIGGRAPH '95 Proceedings of the 22nd annual conference on Computer graphics and interactive techniques.
GI '96 Proceedings of the conference on Graphics interface '96.
Proceedings of the 27th annual conference on Computer graphics and interactive techniques.
Messages embedded in gaze of interface agents: impression management with agent's gaze. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Proceedings of the 29th annual conference on Computer graphics and interactive techniques.
Emotional speech: towards a new generation of databases. Speech Communication, special issue on speech and emotion.
Acquiring and validating motion qualities from live limb gestures. Graphical Models.
Learning physics-based motion style with nonlinear inverse optimization. ACM SIGGRAPH 2005 Papers.
ALMA: a layered model of affect. Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems.
Natural head motion synthesis driven by acoustic prosodic features. Computer Animation and Virtual Worlds, CASA 2005 (Virtual Humans and Social Agents).
The relation between gaze behavior and the attribution of emotion: an empirical study. IVA '08 Proceedings of the 8th international conference on Intelligent Virtual Agents.
Emotional gaze behavior generation in human-agent interaction. CHI '09 Extended Abstracts on Human Factors in Computing Systems.
Real-time expressive gaze animation for virtual humans. Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems, Volume 1.
Glances, glares, and glowering: how should a virtual human express emotion through gaze? Autonomous Agents and Multi-Agent Systems.
Taming Mona Lisa: communicating gaze faithfully in 2D and 3D facial projections. ACM Transactions on Interactive Intelligent Systems (TiiS).
Emotional eye movement generation based on Geneva Emotion Wheel for virtual agents. Journal of Visual Languages and Computing.
EEMML: the emotional eye movement animation toolkit. Multimedia Tools and Applications.
Currently, state-of-the-art virtual agents lack the ability to display emotion as seen in actual humans, or even in hand-animated characters. One reason for this emotional inexpressiveness is the lack of emotionally expressive gaze behavior. For virtual agents to express emotion that observers can empathize with, they need to generate gaze (including eye, head, and torso movement) toward arbitrary targets while displaying arbitrary emotional states. Our previous work [18] describes the Gaze Warping Transformation, a method, derived from human movement data, for generating emotionally expressive head and torso movement during gaze shifts. An evaluation showed that applying different transformations to the same gaze shift could modify the affective state that human observers perceived in the transformed gaze shift. In this paper we propose a model of realistic, emotionally expressive gaze that builds upon the Gaze Warping Transformation by improving the transformation's implementation and by adding a model of eye movement drawn from the visual neuroscience literature. We describe how to generate a gaze shift to an arbitrary target while displaying an arbitrary emotional behavior. Finally, we propose an evaluation to determine which emotions human observers attribute to the generated gaze shifts. Once this work is complete, virtual agents will have access to a new channel for emotionally expressive behavior.
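The core idea above (that a transformation applied to the same underlying gaze shift changes its perceived affect) can be illustrated with a deliberately minimal sketch. Note that this is not the paper's actual Gaze Warping Transformation, which is derived from human movement data; the two-parameter warp, the function name, and the example curve below are purely hypothetical, reducing a "warp" to an amplitude scale and a timing scale on a keyframed joint-angle curve:

```python
# Hypothetical sketch only: reduces a gaze-shift "warp" to two parameters.
# The actual Gaze Warping Transformation is learned from human movement data.

def warp_gaze_shift(keyframes, amp_scale, time_scale):
    """Apply a toy amplitude/timing warp to one joint-angle curve.

    keyframes  : list of (time_seconds, angle_degrees) pairs
    amp_scale  : scales movement amplitude
    time_scale : stretches (>1) or compresses (<1) the movement's duration
    """
    return [(t * time_scale, angle * amp_scale) for t, angle in keyframes]

# A neutral head-pitch curve for a downward gaze shift...
neutral = [(0.0, 0.0), (0.25, -10.0), (0.5, -15.0)]

# ...warped toward a slower, smaller movement, as one might for a
# subdued, low-arousal display.
subdued = warp_gaze_shift(neutral, amp_scale=0.6, time_scale=1.8)
```

Applying a different (amp_scale, time_scale) pair to the same neutral curve yields a differently "flavored" version of the same gaze shift, which is the intuition behind evaluating how observers attribute emotion to warped movements.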