The Catchment Feature Model for Multimodal Language Analysis

  • Authors:
  • Francis Quek

  • Affiliations:
  • -

  • Venue:
  • ICCV '03 Proceedings of the Ninth IEEE International Conference on Computer Vision - Volume 2
  • Year:
  • 2003

Abstract

The Catchment Feature Model (CFM) addresses two questions in multimodal interaction: how to bridge video and audio processing with the realities of human multimodal communication, and how information from the different modes may be fused. We discuss the need for our model, motivate the CFM from psycholinguistic research, and present the model. In contrast to 'whole gesture' recognition, the CFM applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for CFM-based research, cite three concrete examples of Catchment Features (CF), and propose new directions of multimodal research based on the model.