A novel probabilistic framework is proposed for analyzing cross-modal nonverbal interactions in multiparty face-to-face conversations. The goal is to determine "who responds to whom, when, and how" from multimodal cues including gaze, head gestures, and utterances. We formulate this problem as the probabilistic inference of the causal relationships among participants' behaviors, i.e., their head gestures and utterances. To solve it, we propose a hierarchical probabilistic model in which the structures of interactions are probabilistically determined by high-level conversation regimes (such as monologue or dialogue) and gaze directions. Based on the model, the interaction structures, gaze directions, and conversation regimes are simultaneously inferred from observed head motion and utterances using a Markov chain Monte Carlo method. The head gestures, including nodding, shaking, and tilting, are recognized with a novel wavelet-based technique from magnetic sensor signals. The utterances are detected from audio captured by lapel microphones. Experiments on four-person conversations confirm the effectiveness of the framework in discovering interactions such as question-and-answer exchanges and addressing behavior followed by back-channel responses.
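The abstract mentions that head gestures are recognized from motion-sensor signals with a wavelet-based technique. As a rough illustration of the general idea (not the paper's actual algorithm; the function names, the single-level Haar decomposition, and the energy threshold below are all illustrative assumptions), one can compute the high-frequency detail band of a head-pitch signal and threshold its energy, since nodding produces an oscillatory pitch component that a still head lacks:

```python
# Toy sketch of wavelet-based head-gesture detection. All names and the
# threshold are hypothetical; the paper's method is more sophisticated
# (multiple gesture classes, magnetic-sensor input, multiscale analysis).

def haar_detail(signal):
    """Single-level Haar wavelet detail coefficients (difference band)."""
    return [(signal[i] - signal[i + 1]) / 2.0
            for i in range(0, len(signal) - 1, 2)]

def detect_nod(pitch, energy_threshold=0.5):
    """Label a pitch-angle window 'nod' if its detail-band energy
    exceeds the threshold, else 'still'."""
    detail = haar_detail(pitch)
    energy = sum(d * d for d in detail) / max(len(detail), 1)
    return "nod" if energy > energy_threshold else "still"

# A nodding head oscillates in pitch; a still head does not.
nodding = [3.0, -3.0] * 10   # alternating pitch angles (degrees)
still = [0.1] * 20
print(detect_nod(nodding))   # -> nod
print(detect_nod(still))     # -> still
```

Shake and tilt gestures would analogously be detected from the yaw and roll channels; in the paper all three are recognized jointly from the magnetic sensor signals.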