While multimodal interfaces are becoming increasingly common and well supported, their development is still difficult, and authoring tools for this purpose are lacking. The goal of this work is to discuss how multimodality can be specified in model-based languages, and to apply the resulting solution to the composition of graphical and vocal interaction. In particular, we show how to provide structured support for identifying the most suitable solutions for modelling multimodality at various levels of detail. This is achieved by using, among other techniques, the well-known CARE properties (Complementarity, Assignment, Redundancy, Equivalence) within a model-based language able to support service-based applications and modern Web 2.0 interactions. The method is supported by an authoring environment, which provides specific solutions that designers can modify to better suit their needs, and which can generate implementations of multimodal interfaces for Web environments. An example of modelling a multimodal application, together with the corresponding automatically generated user interfaces, is also reported.
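To make the role of the CARE properties concrete, the following is a minimal sketch of how a logical interactor might be annotated with a CARE property and mapped to the concrete modalities that render it. All names here (`Interactor`, `render_modalities`, the `"graphical"`/`"vocal"` labels) are hypothetical illustrations, not part of the language or tool described in the abstract.

```python
from dataclasses import dataclass
from enum import Enum

class CARE(Enum):
    """CARE properties: how multiple modalities combine for one task."""
    COMPLEMENTARITY = "complementarity"  # each modality carries part of the content
    ASSIGNMENT = "assignment"            # one fixed modality carries the content
    REDUNDANCY = "redundancy"            # every modality carries the full content
    EQUIVALENCE = "equivalence"          # any single modality suffices

@dataclass
class Interactor:
    """A logical UI element annotated with a CARE property (hypothetical model)."""
    name: str
    care: CARE
    assigned: str = "graphical"  # consulted only when care is ASSIGNMENT

def render_modalities(interactor, available=("graphical", "vocal")):
    """Return the concrete modalities that should render this interactor."""
    if interactor.care is CARE.ASSIGNMENT:
        return [interactor.assigned]
    if interactor.care is CARE.EQUIVALENCE:
        return [available[0]]  # any one will do; pick the first available
    # REDUNDANCY and COMPLEMENTARITY both involve every available modality,
    # differing only in whether each carries the full or a partial content.
    return list(available)

prompt = Interactor("city_prompt", CARE.REDUNDANCY)
print(render_modalities(prompt))  # ['graphical', 'vocal']
```

A designer refining such a model would typically change only the CARE annotation (e.g. from `REDUNDANCY` to `ASSIGNMENT` for a vocal-only confirmation), leaving the logical description untouched; the generator then derives different concrete interfaces from the same abstract model.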