We present a parallel algorithm for extracting human hand gestures from a video sequence that exploits spatial coherence, momentum coherence, and skin-color constraints through a fuzzy image-integration approach. The dynamics of hand movement are critical to the understanding of gesture. In our deconstruction of hand-movement streams into atomic motions we call 'strokelets', dynamic information helps determine the role each movement plays in the gestural stream. The Vector Coherence Mapping (VCM) algorithm extracts the motion fields from the video, and the resulting motion vectors are clustered to obtain hand motions. The parallel nature of the algorithm and its robustness to motion blur and noise contribute to its effectiveness in gestural motion tracking. The results presented show the efficacy of VCM in extracting gestural motion from real video taken under normal illumination conditions.
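The fuzzy integration step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes that, for each tracked point, the three constraints (spatial coherence, momentum coherence, and skin color) have each been reduced to a normalized likelihood map over a square window of candidate displacements, and that fuzzy combination is an elementwise product (fuzzy AND). The function names and the window convention are hypothetical.

```python
import numpy as np

def fuzzy_combine(spatial_map, momentum_map, color_map):
    """Fuse per-constraint likelihood maps over candidate displacements
    by elementwise product (a fuzzy-AND), then renormalize.
    All maps share the same (2r+1, 2r+1) search-window shape."""
    combined = spatial_map * momentum_map * color_map
    total = combined.sum()
    return combined / total if total > 0 else combined

def best_displacement(combined, radius):
    """Pick the displacement (dy, dx) with the highest fused vote.
    The window center (radius, radius) corresponds to zero motion."""
    peak = np.unravel_index(np.argmax(combined), combined.shape)
    return peak[0] - radius, peak[1] - radius
```

In a full tracker, one such fused map would be computed per interest point per frame, and the winning displacements would then be clustered (e.g. by position and direction) into per-hand motion vectors, mirroring the clustering stage mentioned in the abstract.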