We present a parametric, computational model of head-eye coordination for animating directed gaze shifts in virtual characters. The model is grounded in research on human neurophysiology. It provides control parameters for adapting gaze shifts to the characteristics of the environment, the gaze targets, and the idiosyncratic behavioral attributes of the virtual character. A user study confirms that the model communicates gaze targets as effectively as real humans do, while being subjectively preferred over state-of-the-art models.
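To illustrate the kind of parameterization the abstract describes, the sketch below splits a gaze shift between the eyes and the head. It is a minimal, hypothetical illustration, not the paper's actual model: the function name `plan_gaze_shift`, the `head_alignment` parameter (an idiosyncratic style control interpolating between minimal and full head movement), and the `omr_limit_deg` oculomotor-range limit are assumptions introduced here for illustration.

```python
import math

def plan_gaze_shift(target_angle_deg, head_alignment=0.3, omr_limit_deg=45.0):
    """Split a directed gaze shift between eyes and head (illustrative sketch).

    head_alignment in [0, 1]: 0 = head rotates only as far as the eyes'
    range requires, 1 = head fully aligns with the target.
    omr_limit_deg: assumed oculomotor range, the maximum eye-in-head rotation.
    Returns (eye_rotation_deg, head_rotation_deg), which sum to the target.
    """
    amplitude = abs(target_angle_deg)
    # Minimum head contribution needed so the eyes stay within their range.
    min_head = max(0.0, amplitude - omr_limit_deg)
    # The alignment parameter interpolates between minimal and full head movement.
    head = min_head + head_alignment * (amplitude - min_head)
    eyes = amplitude - head
    sign = math.copysign(1.0, target_angle_deg) if target_angle_deg else 1.0
    return sign * eyes, sign * head
```

For example, a 60-degree shift with `head_alignment=0.0` yields a 45-degree eye rotation plus the 15 degrees of head movement the eye range forces, while `head_alignment=1.0` puts the entire rotation on the head, giving a character a stiffer, more deliberate gaze style from the same target input.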