Multimodal user interfaces can provide natural and efficient interaction between humans and machines in a wide range of applications. Multimodal fusion integrates information from multiple input channels. Many fusion approaches rely on the temporal characteristics of inputs to determine when a user's turn is complete, which introduces a delay in the system response. This paper proposes a multimodal fusion approach, QuickFusion, which utilises syntactic rather than time-stamp information to determine whether the inputs form a complete command, thereby avoiding the time delay in the multimodal fusion process. QuickFusion also helps to resolve input ambiguity.
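The idea of triggering fusion on syntactic completeness rather than a timeout can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual QuickFusion algorithm: the slot names, the `REQUIRED_SLOTS` grammar, and the `fuse` function are all assumptions made for the example.

```python
# Hypothetical sketch (assumed names, not the paper's implementation):
# fuse speech and gesture inputs by checking syntactic completeness of
# an accumulated command frame instead of waiting for a time-stamp-based
# turn timeout.

REQUIRED_SLOTS = {"action", "object"}  # assumed grammar: a command needs both


def fuse(inputs):
    """Accumulate multimodal inputs and return a fused command as soon as
    the required syntactic slots are filled, with no timeout delay."""
    frame = {}
    for modality, slot, value in inputs:
        # A later input can disambiguate an earlier one, e.g. a deictic
        # gesture resolving which object "this" in speech refers to.
        frame[slot] = (modality, value)
        if REQUIRED_SLOTS <= frame.keys():
            return frame  # syntactically complete -> fuse immediately
    return None  # turn not yet complete


cmd = fuse([("speech", "action", "move"),
            ("gesture", "object", "vehicle_12")])
```

Here fusion fires the moment the command frame is syntactically complete; a time-stamp-based approach would instead wait for a silence threshold before committing.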