Hand Motion Gesture Frequency Properties and Multimodal Discourse Analysis
International Journal of Computer Vision
Latent mixture of discriminative experts for multimodal prediction modeling
COLING '10 Proceedings of the 23rd International Conference on Computational Linguistics
The Catchment Feature Model (CFM) addresses two questions in multimodal interaction: how do we bridge video and audio processing with the realities of human multimodal communication, and how may information from the different modes be fused? We discuss the need for our model, motivate the CFM from psycholinguistic research, and present the model. In contrast to 'whole gesture' recognition, the CFM applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for CFM-based research, cite three concrete examples of Catchment Features (CFs), and propose new directions of multimodal research based on the model.