This research proposes a multi-modal fusion framework for high-level data fusion between two or more modalities. It takes as input low-level features extracted from different system devices, then analyses them to identify the intrinsic meanings in these data. The extracted meanings are mutually compared to identify complementarities, ambiguities, and inconsistencies, so as to better understand the user's intention when interacting with the system. The whole fusion life cycle is described and evaluated in an office-environment scenario in which two co-workers interact by voice and movement, both of which may reveal their intentions. Fusion in this case focuses on combining modalities to capture context and thereby enhance the user experience.
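The comparison step described above can be sketched in a few lines of Python. This is only an illustrative toy, not the paper's implementation: the `Meaning` record, the confidence threshold, and the three-way classification into complementary, ambiguous, and inconsistent are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Meaning:
    """Hypothetical meaning extracted from one modality's low-level features."""
    modality: str    # e.g. "voice" or "movement"
    intent: str      # e.g. "open_document"
    confidence: float

def fuse(a: Meaning, b: Meaning) -> str:
    """Classify the relation between two extracted meanings."""
    if a.intent == b.intent:
        # Both modalities point to the same intent: they reinforce each other.
        return "complementary"
    if min(a.confidence, b.confidence) < 0.5:  # threshold is an assumption
        # Low-confidence disagreement: the user's intention remains unclear.
        return "ambiguous"
    # High-confidence disagreement: the modalities contradict each other.
    return "inconsistent"

voice = Meaning("voice", "open_document", 0.9)
gesture = Meaning("movement", "open_document", 0.8)
print(fuse(voice, gesture))  # → complementary
```

In a real deployment the intent labels and confidences would come from per-modality recognisers, and the fusion result would feed a dialogue or context manager that decides whether to act, ask for clarification, or discard the input.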