The use of eye movements in human-computer interaction techniques: what you look at is what you get
ACM Transactions on Information Systems (TOIS) - Special issue on computer-human interaction
Mutual disambiguation of recognition errors in a multimodal architecture
Proceedings of the SIGCHI conference on Human Factors in Computing Systems
Unification-based multimodal integration
ACL '98 Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics
A multimodal learning interface for sketch, speak and point creation of a schedule chart
Proceedings of the 6th international conference on Multimodal interfaces
An efficient unification-based multimodal language processor in multimodal input fusion
OZCHI '07 Proceedings of the 19th Australasian conference on Computer-Human Interaction: Entertaining User Interfaces
CHI '08 Extended Abstracts on Human Factors in Computing Systems
Clavius: bi-directional parsing for generic multimodal interaction
COLING ACL '06 Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Blind and passive digital video tamper detection based on multimodal fusion
ICCOM'10 Proceedings of the 14th WSEAS international conference on Communications
In a multimodal interface, a user can communicate with a system through multiple modalities, such as speech, gesture, and eye gaze. As a critical component of a multimodal interface, multimodal input fusion explores ways to derive a combined semantic interpretation of a user's multimodal inputs. Although multimodal inputs may contain spare information, few multimodal input fusion approaches have addressed how to handle it. This paper proposes a novel multimodal input fusion approach that flexibly skips spare information in multimodal inputs and derives a semantic interpretation from them. An evaluation of the proposed approach confirms that it makes human-computer interaction more natural and smooth.
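The idea of skipping spare information during fusion can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's algorithm: inputs from different modalities are merged in time order into command slots, and any input that fills no slot (e.g. a filler word) is set aside as spare rather than causing fusion to fail. The `Input` class, `fuse` function, and slot vocabulary are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical illustration of spare-information skipping in multimodal
# input fusion; not the approach proposed in the paper.

@dataclass
class Input:
    modality: str   # e.g. "speech" or "gesture"
    value: str
    time: float     # seconds since utterance start

def fuse(inputs, slots):
    """Fill each command slot with the first time-ordered input whose
    value the slot accepts; unmatched ("spare") inputs are skipped."""
    interpretation = {}
    spare = []
    for inp in sorted(inputs, key=lambda i: i.time):
        slot = next((s for s, allowed in slots.items()
                     if s not in interpretation and inp.value in allowed),
                    None)
        if slot is not None:
            interpretation[slot] = inp.value
        else:
            spare.append(inp)   # skipped, not treated as a fusion failure
    return interpretation, spare

# "move that there": speech plus two pointing gestures, with a stray
# filler word that matches no slot.
inputs = [
    Input("speech", "move", 0.0),
    Input("speech", "um", 0.2),      # spare information
    Input("gesture", "obj_7", 0.4),
    Input("gesture", "loc_3", 0.9),
]
slots = {"action": {"move", "delete"},
         "object": {"obj_7", "obj_9"},
         "target": {"loc_3", "loc_5"}}

meaning, spare = fuse(inputs, slots)
print(meaning)                    # {'action': 'move', 'object': 'obj_7', 'target': 'loc_3'}
print([i.value for i in spare])   # ['um']
```

Because the spare input is skipped instead of rejected, the remaining inputs still unify into a complete interpretation, which is the behavior the abstract describes as making interaction more natural and smooth.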