This paper addresses the recognition of people's visual focus of attention (VFOA), the discrete version of gaze that indicates who is looking at whom or what. In the absence of high-definition images, we rely on people's head pose to recognize the VFOA. Contrary to most previous work, which assumed a fixed mapping between head pose directions and gaze target directions, we investigate novel gaze models documented in psychovision that produce a dynamic (temporal) mapping between them. This mapping accounts for two important factors affecting the head/gaze relationship: the shoulder orientation defining a person's gaze midline varies over time, and gaze shifts from frontal to the side involve different head rotations than the reverse. Evaluated on a public dataset and on data recorded with the humanoid robot Nao, the method exhibits better adaptivity and often produces better performance than state-of-the-art approaches.
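The core idea can be illustrated with a minimal sketch (not the authors' actual model): head pose typically covers only a fraction of a gaze shift relative to the body midline, and that midline itself drifts over time. All names, the 1-D pan-only geometry, and the parameter values below are illustrative assumptions.

```python
class DynamicGazeModel:
    """Toy VFOA classifier: maps head pan to gaze targets via a
    dynamic reference (shoulder/body midline) direction. Angles in degrees."""

    def __init__(self, targets, alpha=0.5, ref_rate=0.1):
        self.targets = targets    # target name -> direction (deg), assumed known
        self.alpha = alpha        # assumed fraction of a gaze shift done by the head
        self.ref_rate = ref_rate  # how fast the midline adapts to observations
        self.ref = 0.0            # current estimate of the body midline

    def predict_head(self, target_dir):
        # The head turns only part of the way from the midline to the target.
        return self.ref + self.alpha * (target_dir - self.ref)

    def classify(self, head_pan):
        # Pick the target whose predicted head pose best matches the observation.
        best = min(self.targets,
                   key=lambda t: abs(head_pan - self.predict_head(self.targets[t])))
        # Slowly drift the midline toward the observed head pose, making the
        # head-to-gaze mapping temporal rather than fixed.
        self.ref += self.ref_rate * (head_pan - self.ref)
        return best
```

For example, with targets `{"robot": 0.0, "screen": 60.0}` and `alpha=0.5`, an observed head pan of 25° is classified as "screen", since the model expects only about a 30° head turn for a 60° gaze target; a fixed pose-equals-gaze mapping would favor "robot".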