Multimodal interaction with XForms

  • Authors:
  • Mikko Honkala; Mikko Pohja

  • Affiliations:
  • Helsinki University of Technology (TML), Finland

  • Venue:
  • ICWE '06: Proceedings of the 6th International Conference on Web Engineering
  • Year:
  • 2006

Abstract

The increase in connected mobile computing devices has created the need for ubiquitous Web access. In many usage scenarios, it would be beneficial to interact multimodally. Current Web user interface description languages, such as HTML and VoiceXML, concentrate on only one modality. Some languages, such as SALT and X+V, allow combining aural and visual modalities, but they lack ease of authoring, since both modalities have to be authored separately. Thus, for ease of authoring and maintainability, it is necessary to provide a cross-modal user interface language with a higher semantic level. We propose a novel model, called XFormsMM, which combines XForms 1.0 with modality-dependent stylesheets and a multimodal interaction manager. The model separates the modality-independent parts from the modality-dependent parts, thus automatically providing most of the user interface to all modalities. The model allows flexible modality changes, so that the user can decide which modalities to use and when.
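
The core idea can be illustrated with ordinary XForms 1.0 markup hosted in XHTML, where the form controls are modality-independent and a separate stylesheet is attached per modality. The snippet below is an illustrative sketch, not the paper's actual XFormsMM markup; the stylesheet file names and the search service URL are invented for the example.

  <html xmlns="http://www.w3.org/1999/xhtml"
        xmlns:xf="http://www.w3.org/2002/xforms">
    <head>
      <title>City search</title>
      <!-- Modality-dependent parts: one stylesheet per modality -->
      <link rel="stylesheet" media="screen" href="visual.css"/>
      <link rel="stylesheet" media="aural" href="speech.css"/>
      <!-- Modality-independent part: data model and submission -->
      <xf:model>
        <xf:instance>
          <query xmlns=""><city/></query>
        </xf:instance>
        <xf:submission id="search" method="get"
                       action="http://example.org/search"/>
      </xf:model>
    </head>
    <body>
      <!-- Abstract controls; each modality renders and collects
           the same data in its own way -->
      <xf:input ref="city">
        <xf:label>City</xf:label>
        <xf:hint>Say or type the name of a city</xf:hint>
      </xf:input>
      <xf:submit submission="search">
        <xf:label>Search</xf:label>
      </xf:submit>
    </body>
  </html>

In such an arrangement, only the link elements and the stylesheets they reference are modality-specific; the XForms model and controls are shared, and an interaction manager of the kind the paper proposes would decide which rendering is active at any given time.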