Multimodal framework for mobile interaction

  • Authors:
  • Francesco Cutugno, Vincenza Anna Leano, Roberto Rinaldi, Gianluca Mignini

  • Affiliations:
  • University of Naples, Naples, Italy (all authors)

  • Venue:
  • Proceedings of the International Working Conference on Advanced Visual Interfaces
  • Year:
  • 2012


Abstract

In recent years multimodal interaction has attracted growing interest thanks to the increasing availability of mobile devices. Accordingly, many applications combining speech, touch-screen gestures, and other interaction modalities are now appearing on the various app markets. Multimodality requires procedures that integrate distinct input events so that they can be interpreted as a single user intention. There is no agreement on how this integration should be realized, and a shared approach that abstracts a set of basic functions reusable in any multimodal application is still missing. Designing and implementing multimodal systems therefore remains a difficult task. In response to this situation, the goal of our research is to explore how a simple framework can support the design of multimodal user interfaces. In this paper we propose a framework that aims to ease the design of simple multimodal commands in the mobile environment (more specifically, in Android applications). The proposed system is based on the standards issued by the W3C consortium for Multimodal Interaction [8] [9] and on the definition of a set of CARE [2] properties; moreover, the system makes use of some features available in the SMUIML language [3]. We finally present a case study: a mobile GIS application implemented on top of the proposed framework.
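
The central problem the abstract names, integrating events from different modalities into a single user intention, is often handled with time-window fusion: inputs from different channels that arrive close together in time are merged into one command. The sketch below is purely illustrative and not the paper's actual engine; the class, record, and window value are all hypothetical assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of time-window multimodal fusion (not the framework
// described in the paper): input events from different modalities whose
// timestamps fall within WINDOW_MS of the group's first event are merged
// into a single candidate command.
public class FusionSketch {
    static final long WINDOW_MS = 1500; // assumed fusion window, in ms

    // One atomic input from some modality, e.g. speech or touch.
    record Event(String modality, String value, long timestampMs) {}

    // Group time-ordered events: a new group starts whenever an event
    // arrives more than WINDOW_MS after the current group's first event.
    static List<List<Event>> fuse(List<Event> events) {
        List<List<Event>> commands = new ArrayList<>();
        List<Event> current = new ArrayList<>();
        for (Event e : events) {
            if (!current.isEmpty()
                    && e.timestampMs() - current.get(0).timestampMs() > WINDOW_MS) {
                commands.add(current);
                current = new ArrayList<>();
            }
            current.add(e);
        }
        if (!current.isEmpty()) commands.add(current);
        return commands;
    }

    public static void main(String[] args) {
        // "Zoom here" plus a touch point 600 ms later fuse into one command
        // (CARE Complementarity); the later speech input stands alone.
        List<Event> events = List.of(
                new Event("speech", "zoom here", 0),
                new Event("touch", "point(120,340)", 600),
                new Event("speech", "pan north", 5000));
        for (List<Event> cmd : fuse(events)) {
            System.out.println(cmd.size() + " event(s) fused");
        }
    }
}
```

In a GIS scenario like the paper's case study, such complementary fusion would let a spoken command ("zoom here") be resolved against a near-simultaneous touch location on the map.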