Gaze is an extremely powerful expressive signal, used for purposes ranging from expressing emotion to regulating human interaction. Gaze has been exploited to great effect in hand-animated characters, greatly enhancing the believability of the character's simulated life. However, virtual humans animated in real time have been less successful at using expressive gaze, in part because we lack a model of expressive gaze for virtual humans. A gaze shift toward any given target can be performed in many different ways, each expressive manner potentially implying a different emotional or cognitive internal state. However, there is currently no mapping that describes how a user will attribute these internal states to a virtual character performing a gaze shift in a particular manner. In this paper, we begin to address this gap with the results of an empirical study that explores the mapping between the manner of a gaze shift and the emotional state an observer attributes to it. The purpose of this mapping is to allow an interactive virtual human to generate believable gaze shifts to which a user will attribute a desired emotional state. We generated a set of animations by composing low-level gaze attributes drawn from the nonverbal behavior literature; subjects then judged the animations displaying these attributes. While the results do not provide a complete mapping between gaze and emotion, they do provide a basis for a generative model of expressive gaze.
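To make the attribute-composition idea concrete, the following is a minimal Python sketch of how low-level gaze attributes might be bundled into an expressive gaze shift and grouped into emotion presets. The attribute names, value ranges, preset values, and the compose_gaze_shift helper are all illustrative assumptions, not the parameterization or mapping reported in the study.

```python
from dataclasses import dataclass

# Hypothetical low-level gaze attributes. The study composes attributes
# drawn from the nonverbal behavior literature; these specific fields,
# ranges, and the preset table below are illustrative assumptions only.
@dataclass
class GazeAttributes:
    head_alignment: float   # 0.0 = eyes-only shift, 1.0 = full head turn
    shift_velocity: float   # normalized peak velocity of the gaze shift
    eyelid_opening: float   # 0.0 = nearly closed, 1.0 = wide open
    head_pitch: float       # negative = head bowed, positive = head raised

# Illustrative presets: attribute settings an observer might read as a
# given emotional state. Values here are invented for the sketch.
EMOTION_PRESETS = {
    "fearful":  GazeAttributes(head_alignment=0.2, shift_velocity=0.9,
                               eyelid_opening=1.0, head_pitch=-0.3),
    "dominant": GazeAttributes(head_alignment=1.0, shift_velocity=0.7,
                               eyelid_opening=0.6, head_pitch=0.2),
    "sad":      GazeAttributes(head_alignment=0.5, shift_velocity=0.3,
                               eyelid_opening=0.4, head_pitch=-0.5),
}

def compose_gaze_shift(target: tuple[float, float, float],
                       attrs: GazeAttributes) -> dict:
    """Bundle a gaze target with expressive attributes; a real system
    would hand this description to the character's animation engine."""
    return {"target": target, "attributes": attrs}

# Example: request a gaze shift toward a target that an observer
# should read as fearful.
shift = compose_gaze_shift((1.0, 1.6, 2.0), EMOTION_PRESETS["fearful"])
print(shift)
```

In a full system, the preset table would be populated from an empirically derived attribute-to-emotion mapping, such as the one this study begins to establish, rather than from hand-chosen values.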