Multimodal dialogue systems allow users to provide input through multiple modalities, handling composite multimodal input delivered either simultaneously or sequentially. Depending on the coordination scheme, such a system must capture, collect, and integrate the user's input across modalities and then respond to a joint interpretation. We conducted a study to understand the temporal variability of input to multimodal dialogue systems and to evaluate methods for collecting that input. As an enhancement, the study proposed incorporating a dynamic time window into a multimodal input fusion module. We found that the enhanced module offers superior temporal characteristics and greater robustness than previous methods.
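The abstract does not specify how the dynamic time window operates, but the idea can be illustrated with a minimal sketch: events from different modalities are fused when their timestamps fall within a window, and the window adapts toward the lags actually observed for this user. All class and parameter names here (`DynamicWindowFuser`, `initial_window`, `alpha`) are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class InputEvent:
    modality: str      # e.g. "speech" or "gesture"
    content: str
    timestamp: float   # seconds since session start

class DynamicWindowFuser:
    """Fuse events from different modalities when their timestamps fall
    within a fusion window that adapts to the user's observed lag.

    A sketch only: the paper's actual adaptation rule is not given."""

    def __init__(self, initial_window: float = 1.0, alpha: float = 0.3):
        self.window = initial_window   # current fusion window (s)
        self.alpha = alpha             # smoothing factor for adaptation
        self.pending: List[InputEvent] = []

    def add(self, event: InputEvent) -> Optional[Tuple[InputEvent, InputEvent]]:
        """Return a fused pair if `event` matches a pending event
        from another modality, else buffer it and return None."""
        for other in self.pending:
            if other.modality != event.modality:
                lag = abs(event.timestamp - other.timestamp)
                if lag <= self.window:
                    # Adapt: pull the window toward the observed lag
                    # plus a 50% safety margin, smoothed by alpha.
                    self.window += self.alpha * (1.5 * lag - self.window)
                    self.pending.remove(other)
                    return (other, event)  # joint interpretation input
        self.pending.append(event)
        return None

fuser = DynamicWindowFuser()
fuser.add(InputEvent("gesture", "point@map(3,4)", 0.0))   # buffered
pair = fuser.add(InputEvent("speech", "zoom here", 0.4))  # fused with gesture
```

A fixed threshold would reject legitimate pairs from slow users and falsely fuse unrelated input from fast ones; letting the window track observed lags is one plausible way to get the improved temporal robustness the study reports.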