In this paper we present a novel approach to the combined modeling of multimodal fusion and interaction management. The approach is based on a declarative multimodal event logic that integrates inputs distributed over multiple modalities in accordance with spatial, temporal, and semantic constraints. In conjunction with a visual state chart language, the approach supports incremental parsing and fusion of inputs as well as a tight coupling with interaction management. Incremental, parallel parsing allows the system to cope with concurrent continuous and discrete interactions and to perform fusion at different levels of abstraction. The high-level visual and declarative modeling methods support rapid prototyping and iterative development of multimodal systems.