QuickSet: multimodal interaction for distributed applications
MULTIMEDIA '97 Proceedings of the fifth ACM international conference on Multimedia
“Put-that-there”: Voice and gesture at the graphics interface
SIGGRAPH '80 Proceedings of the 7th annual conference on Computer graphics and interactive techniques
XISL: a language for describing multimodal interaction scenarios
Proceedings of the 5th international conference on Multimodal interfaces
Multimodal interaction with XForms
ICWE '06 Proceedings of the 6th international conference on Web engineering
MLMI'07 Proceedings of the 4th international conference on Machine learning for multimodal interaction
Multimodal local search in Speak4it
Proceedings of the 16th international conference on Intelligent user interfaces
MVIC: an MVC extension for interactive, multimodal applications
ECSA'13 Proceedings of the 7th European conference on Software Architecture
Multimodal interaction: A review
Pattern Recognition Letters
In recent years, multimodal interaction has attracted growing interest thanks to the increasing availability of mobile devices. As a result, many applications that use speech, touch-screen gestures, and other interaction modalities are now appearing on the various app markets. Multimodality requires procedures for integrating different events so that they can be interpreted as a single user intention. There is no agreement on how this integration should be realized, and a shared approach that abstracts a set of basic functions usable in any multimodal application is still missing. Designing and implementing multimodal systems therefore remains a difficult task. In response to this situation, the goal of our research is to explore how a simple framework can support the design of multimodal user interfaces. In this paper we propose a framework that aims to support the design of simple multimodal commands in the mobile environment (more specifically, in Android applications). The proposed system is based on the W3C standard for Multimodal Interaction [8] [9] and on the definition of a set of CARE properties [2]; moreover, the system makes use of some features available in the SMUIML language [3]. Finally, we present a case study implementing a mobile GIS application based on the proposed framework.
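The integration step the abstract describes can be illustrated with a minimal sketch. The class and method names below are our own (not the paper's API), and the time-window rule is one common, simplified fusion policy: a speech event and a touch event are merged into a single intention only if they occur close enough in time, in the spirit of "Put-that-there"-style interaction.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical fusion sketch: pair a speech command with a touch location
// occurring within a fixed time window, yielding one combined user intention.
public class FusionSketch {
    // A single input event from one modality (e.g. "speech" or "touch").
    record Event(String modality, String payload, long timeMs) {}

    // Fuse the latest speech and touch events if their timestamps are
    // within windowMs of each other; otherwise report no fusion.
    static Optional<String> fuse(List<Event> events, long windowMs) {
        Event speech = null, touch = null;
        for (Event e : events) {
            if (e.modality().equals("speech")) speech = e;
            if (e.modality().equals("touch"))  touch  = e;
        }
        if (speech == null || touch == null) return Optional.empty();
        if (Math.abs(speech.timeMs() - touch.timeMs()) > windowMs) return Optional.empty();
        return Optional.of(speech.payload() + " @ " + touch.payload());
    }

    public static void main(String[] args) {
        // "zoom here" spoken at t=1000ms, screen tap at t=1150ms: within the window.
        List<Event> events = List.of(
            new Event("speech", "zoom here", 1000),
            new Event("touch", "(120,340)", 1150));
        System.out.println(fuse(events, 500).orElse("no fusion"));
        // prints: zoom here @ (120,340)
    }
}
```

In a real system the fusion policy would be richer (e.g. governed by CARE properties such as complementarity vs. redundancy), but the time-window pairing above captures the basic idea of turning separate modality events into one interpreted command.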