Eye movement plays an important role in face-to-face communication: it conveys nonverbal information and emotional intent beyond speech. As "a window to the mind", the eyes and their behavior are tightly coupled with human cognitive processes. In this paper, we propose the Emotional Eye Movement Markup Language (EEMML), an animation scripting tool that enables authors to describe and generate emotional eye movement in virtual agents. The language can describe eye movement parameters we derived from a facial expression database as well as real-time eye movement data (pupil size, blink rate, and saccades). An EEMML script supplies our eye movement generator with one or more eye movement actions to perform in sequence. The language is extensible, so new rules can be added quickly, and it is designed to plug into larger human-agent or agent-agent interaction systems. We present an evaluation in which subjects assessed EEMML and gave feedback; the results indicate the validity of our approach.
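To make the scripting idea concrete, below is a minimal Python sketch of how an EEMML-style script might be parsed and handed to a generator, covering the parameters the abstract names (pupil size, blink rate, saccade) as a sequence of actions. All element and attribute names here are invented for illustration; the paper's actual EEMML syntax is not given in this abstract and may differ.

    # Hypothetical EEMML-style parsing sketch. Tag and attribute names
    # ("eemml", "action", "gaze", etc.) are assumptions, not the paper's
    # actual syntax.
    import xml.etree.ElementTree as ET

    SCRIPT = """
    <eemml>
      <action type="gaze" target="listener" emotion="joy" duration="1.2"/>
      <action type="blink" rate="18"/>
      <action type="pupil" size="5.1"/>
      <action type="saccade" amplitude="12" direction="left"/>
    </eemml>
    """

    def run_script(xml_text: str) -> None:
        """Walk the script and dispatch each eye movement action in order."""
        root = ET.fromstring(xml_text)
        for action in root.iter("action"):
            kind = action.get("type")
            params = {k: v for k, v in action.attrib.items() if k != "type"}
            # A real system would pass these to the animation engine;
            # this sketch just prints the parsed sequence.
            print(f"{kind}: {params}")

    if __name__ == "__main__":
        run_script(SCRIPT)

Running the sketch prints the parsed action sequence in document order; in an actual system, each action would instead be mapped onto the agent's animation engine, and extensibility would amount to registering handlers for new action types.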