This paper presents a probabilistic framework, incorporating automatic image-based gaze detection, for inferring the structure of multiparty face-to-face conversations. The framework infers conversation regimes and gaze patterns from the nonverbal behaviors of meeting participants, captured as image and audio streams by cameras and microphones. A conversation regime corresponds to a global conversational pattern, such as monologue or dialogue, and a gaze pattern indicates "who is looking at whom". The input nonverbal behaviors comprise the presence or absence of utterances, head directions, and discrete head-centered eye-gaze directions. In contrast to conventional meeting-analysis methods, which rely on the participant's head pose alone as a surrogate for visual focus of attention, this paper incorporates vision-based gaze detection, combined with head-pose tracking, into a probabilistic conversation model based on a dynamic Bayesian network. The gaze detector differentiates three to five eye-gaze directions, e.g. left, straight, and right. Experiments on four-person conversations confirm the power of the proposed framework in identifying conversation structure and in estimating gaze patterns with higher accuracy than previous models.
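To make the modeling idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: a two-state hidden regime (monologue vs. dialogue) tracked with a forward filtering pass over a toy observation stream. The transition and emission probabilities, the regime labels, and the single "number of concurrent speakers" observation are all invented stand-ins for the paper's richer utterance, head-direction, and eye-gaze features.

```python
# Illustrative sketch (assumed parameters, not from the paper):
# hidden state R_t = conversation regime; observation = how many
# participants are speaking at time t, coded 0, 1, or 2+ (index 2).
import numpy as np

REGIMES = ["monologue", "dialogue"]

# P(R_t | R_{t-1}): regimes tend to persist over time.
TRANS = np.array([[0.9, 0.1],
                  [0.1, 0.9]])

# P(obs | regime): rows = regime, cols = #speakers in {0, 1, 2+}.
EMIT = np.array([[0.15, 0.80, 0.05],   # monologue: mostly one speaker
                 [0.20, 0.40, 0.40]])  # dialogue: frequent overlap

def filter_regimes(observations, prior=(0.5, 0.5)):
    """Forward (filtering) pass: P(R_t | obs_1..t) for each t."""
    belief = np.asarray(prior, dtype=float)
    posteriors = []
    for obs in observations:
        belief = TRANS.T @ belief       # predict: propagate through dynamics
        belief = belief * EMIT[:, obs]  # update: weight by likelihood
        belief /= belief.sum()          # normalize to a distribution
        posteriors.append(belief.copy())
    return posteriors

# One speaker for three frames, then sustained overlap for three frames.
post = filter_regimes([1, 1, 1, 2, 2, 2])
print([REGIMES[int(np.argmax(p))] for p in post])
# → ['monologue', 'monologue', 'monologue', 'dialogue', 'dialogue', 'dialogue']
```

The paper's full model couples this regime layer with a per-participant gaze-pattern layer and multimodal observations; the sketch only shows the filtering mechanics shared by such dynamic Bayesian networks.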