A novel probabilistic framework is proposed for inferring the structure of face-to-face multiparty conversation from gaze patterns, head directions, and the presence or absence of utterances. Conversation structure is defined here as the combination of participants and their participation roles. First, we assess the gaze patterns that frequently appear in conversations and define typical types of conversation structure, called conversational regimes, hypothesizing that the regime represents the high-level process that governs how people interact during conversation. Next, assuming that regime changes over time exhibit Markov properties, we propose a probabilistic conversation model based on Markov switching: the regime controls the dynamics of utterances and gaze patterns, which in turn stochastically yield measurable head-direction changes. A Gibbs sampler is then used to perform Bayesian estimation of the regime, gaze patterns, and model parameters from the observed head directions and utterances. Experiments on four-person conversations confirm the effectiveness of the framework in identifying conversation structures.
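The core inference idea — a hidden regime sequence with Markov dynamics, estimated by Gibbs sampling from noisy observations — can be illustrated with a minimal sketch. This is not the paper's implementation: it uses only two hypothetical regimes, a single binary observation stream (utterance present/absent) instead of the full gaze/head-direction model, and illustrative parameter values chosen for the example. Regime labels, transition probabilities, and emission probabilities below are all assumptions.

```python
# Minimal sketch (illustrative, not the paper's model): single-site Gibbs
# sampling of a hidden regime sequence in a two-regime Markov-switching
# model. The regime controls the probability that an utterance is observed
# at each time step; parameters are fixed and known to keep the example short.
import random

random.seed(0)

A = [[0.95, 0.05],          # sticky regime-transition matrix (assumed)
     [0.05, 0.95]]
p_utt = [0.9, 0.1]          # P(utterance observed | regime) (assumed)

def sample_cat(p):
    """Draw an index from a discrete distribution p."""
    u, c = random.random(), 0.0
    for k, pk in enumerate(p):
        c += pk
        if u < c:
            return k
    return len(p) - 1

def simulate(T):
    """Generate a ground-truth regime sequence and binary observations."""
    z = [0]
    for _ in range(T - 1):
        z.append(sample_cat(A[z[-1]]))
    x = [1 if random.random() < p_utt[zt] else 0 for zt in z]
    return z, x

def lik(xt, k):
    """Observation likelihood P(x_t | regime k)."""
    return p_utt[k] if xt == 1 else 1.0 - p_utt[k]

def gibbs(x, sweeps=200, burn=50):
    """Resample each regime given its neighbours and the observation
    (Markov blanket), then return the posterior-mode regime per step."""
    T = len(x)
    z = [random.randint(0, 1) for _ in range(T)]
    counts = [[0, 0] for _ in range(T)]
    for s in range(sweeps):
        for t in range(T):
            w = []
            for k in (0, 1):
                p = lik(x[t], k)
                if t > 0:
                    p *= A[z[t - 1]][k]     # link to previous regime
                if t < T - 1:
                    p *= A[k][z[t + 1]]     # link to next regime
                w.append(p)
            tot = w[0] + w[1]
            z[t] = sample_cat([w[0] / tot, w[1] / tot])
        if s >= burn:
            for t in range(T):
                counts[t][z[t]] += 1
    return [0 if c[0] >= c[1] else 1 for c in counts]

true_z, obs = simulate(200)
est_z = gibbs(obs)
accuracy = sum(a == b for a, b in zip(true_z, est_z)) / len(true_z)
print(f"regime recovery accuracy: {accuracy:.2f}")
```

In the full framework the conditional for each regime would additionally involve the gaze-pattern and head-direction layers, and the model parameters themselves would be resampled within the same Gibbs sweep rather than held fixed.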