Gesture and speech are part of a single human language system: they are co-expressive, complementary channels in the act of speaking. While speech carries the major load of symbolic presentation, gesture provides the imagistic content. Proceeding from the established cotemporality of gesture and speech, we discuss our work on oscillatory gestures and speech. We present our wavelet-based approach to gestural oscillation extraction, which detects oscillations as geodesic ridges in frequency-time space. We motivate the potential of such computational cross-modal language analysis by performing a microanalysis of a video dataset in which a subject describes her living space. We demonstrate the ability of our algorithm to extract gestural oscillations and show how oscillatory gestures reveal portions of the discourse structure.
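The abstract's core idea, extracting gestural oscillations as ridges in the frequency-time (scale-time) plane of a wavelet transform, can be illustrated with a minimal sketch. This is not the authors' algorithm: it uses a hand-rolled complex Morlet continuous wavelet transform, a synthetic hand-motion trace in place of tracked video data, and a greedy per-frame maximum rather than geodesic ridge following; all names and parameters are illustrative assumptions.

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet sampled at times t (s), dilated by scale (s)."""
    x = t / scale
    return np.exp(1j * w0 * x - 0.5 * x ** 2) / np.sqrt(scale)

def cwt(signal, scales, dt):
    """Continuous wavelet transform by direct convolution at each scale."""
    n = len(signal)
    tk = (np.arange(n) - n // 2) * dt          # centered kernel time axis
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        out[i] = np.convolve(signal, morlet(tk, s), mode="same") * dt
    return out

def ridge(power):
    """Greedy ridge: index of the maximum-power scale at each time step."""
    return power.argmax(axis=0)

# Synthetic hand-motion trace (hypothetical stand-in for tracked video):
# a 2 Hz oscillation embedded in noise, sampled at video rate.
rng = np.random.default_rng(0)
fs = 60.0                                      # frames per second
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * 2.0 * t) + 0.2 * rng.standard_normal(len(t))

w0 = 6.0
scales = np.linspace(0.1, 1.0, 60)             # scales in seconds
power = np.abs(cwt(x, scales, dt=1 / fs)) ** 2
r = ridge(power)

# Convert the ridge scale (median over the middle of the trace, away from
# edge effects) back to a frequency estimate.
s_mid = np.median(scales[r[len(t) // 4: 3 * len(t) // 4]])
f_est = w0 / (2 * np.pi * s_mid)               # should sit near 2 Hz here
```

A per-frame argmax is the simplest stand-in for a ridge; a geodesic formulation would instead trace connected paths of locally maximal power through the scale-time plane, which is more robust when several oscillations overlap.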