We present a system for controlling the eye gaze of a virtual embodied conversational agent that can perceive the physical environment in which it interacts. The system is inspired by known components of the human visual attention system and reproduces its limitations in terms of visual acuity, sensitivity to movement, short-term memory capacity, and object pursuit. The aim of this coupling between animation and visual scene analysis is to convey a sense of presence and mutual attention to human interlocutors. After a brief introduction to the research project and a focused review of the state of the art, we detail the components of our system and compare its simulation results with eye-gaze data collected from viewers observing the same natural scenes.
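As an illustration only (not the authors' implementation), the attention limitations named above could be sketched as a scoring loop: acuity falls off with eccentricity from the current gaze point, apparent motion boosts salience, and a bounded short-term memory records recently attended targets. All names, weights, and the exponential falloff are assumptions chosen for the sketch.

```python
import math
from dataclasses import dataclass

@dataclass
class PerceivedObject:
    name: str
    position: tuple[float, float]  # coordinates in the agent's view plane
    speed: float                   # apparent motion magnitude

def acuity(eccentricity: float, falloff: float = 0.5) -> float:
    """Visual acuity decays with distance from the fovea (assumed exponential)."""
    return math.exp(-falloff * eccentricity)

def attention_score(obj: PerceivedObject, gaze: tuple[float, float],
                    motion_weight: float = 2.0) -> float:
    """Combine acuity-weighted visibility with sensitivity to movement."""
    eccentricity = math.dist(obj.position, gaze)
    return acuity(eccentricity) + motion_weight * obj.speed

def select_gaze_target(objects, gaze, memory, capacity=4):
    """Pick the most salient object and keep a bounded short-term memory
    of recently attended targets (the capacity limit is illustrative)."""
    best = max(objects, key=lambda o: attention_score(o, gaze))
    memory = (memory + [best.name])[-capacity:]
    return best, memory

if __name__ == "__main__":
    objs = [
        PerceivedObject("ball", (1.0, 0.0), 0.9),   # far but moving
        PerceivedObject("chair", (0.2, 0.0), 0.0),  # near the fovea, static
    ]
    target, memory = select_gaze_target(objs, (0.0, 0.0), [])
    print(target.name, memory)  # the moving object wins despite its eccentricity
```

The motion term dominating acuity here mirrors the abstract's emphasis on sensitivity to movement; a full system would of course drive the gaze animation from the selected target rather than just returning it.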