Multimodal User Interaction (MMUI) technology aims to build natural and intuitive interfaces that allow a user to interact with a computer in a way similar to human-to-human communication, for example through speech and gestures. As a critical component of MMUI, Multimodal Input Fusion explores ways to derive a combined semantic interpretation from user inputs across multiple modalities. This paper presents a novel approach to multi-sensory data fusion based on speech and manual deictic gesture inputs. The effectiveness of the technique has been validated through experiments using a traffic incident management scenario, in which an operator interacts with a map on a large display at a distance and issues multimodal commands through speech and manual gestures. A description of the proposed approach and preliminary experimental results are presented.
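
As an illustration of the kind of speech-and-gesture fusion described above, the sketch below binds a deictic word in a recognized utterance (e.g. "close this incident") to the pointing gesture that is closest in time. This is a minimal, assumed example: the data structures, the deictic word list, and the 1.5-second alignment window are choices made for exposition here, not the algorithm or parameters proposed in the paper.

# Illustrative sketch only: toy time-window fusion of a speech command with a
# deictic pointing gesture on a map display. All names and thresholds are
# assumptions for exposition, not the paper's method.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechInput:
    text: str      # recognized utterance, e.g. "close this incident"
    start: float   # utterance start time, seconds
    end: float     # utterance end time, seconds

@dataclass
class GestureInput:
    target_id: str  # map object the operator pointed at
    time: float     # time of the pointing gesture, seconds

DEICTIC_WORDS = {"this", "that", "here", "there"}

def fuse(speech: SpeechInput, gestures: list[GestureInput],
         window: float = 1.5) -> Optional[dict]:
    """Bind a deictic word in the utterance to the temporally closest gesture.

    Returns a combined semantic frame, or None if the utterance is deictic
    but no gesture falls within `window` seconds of it.
    """
    if not any(w in DEICTIC_WORDS for w in speech.text.lower().split()):
        # No deictic reference: treat as a unimodal speech command.
        return {"command": speech.text, "target": None}
    candidates = [g for g in gestures
                  if speech.start - window <= g.time <= speech.end + window]
    if not candidates:
        return None  # deictic reference left unresolved
    # Choose the gesture closest in time to the utterance midpoint.
    midpoint = (speech.start + speech.end) / 2.0
    best = min(candidates, key=lambda g: abs(g.time - midpoint))
    return {"command": speech.text, "target": best.target_id}

if __name__ == "__main__":
    speech = SpeechInput("close this incident", start=10.0, end=11.2)
    gestures = [GestureInput("incident_42", time=10.4)]
    print(fuse(speech, gestures))
    # -> {'command': 'close this incident', 'target': 'incident_42'}

A real fusion engine would also have to weigh recognition confidence from each modality and handle competing gesture candidates; the single time-window rule above is only meant to make the basic binding step concrete.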