Achieving autonomous virtual humans with coherent and natural motions is key to the effectiveness of many educational, training, and therapeutic applications. Among the several aspects to be considered, gaze behavior is an important non-verbal communication channel that plays a vital role in how natural and effective the resulting animations are. This paper focuses on analyzing gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations, and it addresses the timing of gaze shifts and their coordination with targets and observers at varied positions.