The utility of speech input in user-computer interfaces
International Journal of Man-Machine Studies
Designing the user interface (2nd ed.): strategies for effective human-computer interaction
The role of natural language in a multimodal interface
UIST '92 Proceedings of the 5th annual ACM symposium on User interface software and technology
A Java based XML browser for consumer devices
Proceedings of the 2002 ACM symposium on Applied computing
Requirements for Automatically Generating Multi-Modal Interfaces for Complex Appliances
ICMI '02 Proceedings of the 4th IEEE International Conference on Multimodal Interfaces
Abstract user interface representations: how well do they support universal access?
CUU '03 Proceedings of the 2003 conference on Universal usability
A Configurable XForms Implementation
ISMSE '04 Proceedings of the IEEE Sixth International Symposium on Multimedia Software Engineering
Using XForms to simplify Web programming
WWW '05 Proceedings of the 14th international conference on World Wide Web
Tool-supported single authoring for device independence and multimodality
Proceedings of the 7th international conference on Human computer interaction with mobile devices & services
CSS Layout Engine for Compound Documents
LA-WEB '05 Proceedings of the Third Latin American Web Congress
A formal model to handle the adaptability of multimodal user interfaces
Proceedings of the 1st international conference on Ambient media and systems
Delivering interactive multimedia services in dynamic pervasive computing environments
Proceedings of the 1st international conference on Ambient media and systems
Interactive office documents: a new face for web 2.0 applications
Proceedings of the eighth ACM symposium on Document engineering
Enabling adaptive time-based web applications with SMIL state
Proceedings of the eighth ACM symposium on Document engineering
Towards a minimalist multimodal dialogue framework using recursive MVC pattern
ICMI '08 Proceedings of the 10th international conference on Multimodal interfaces
The WAMI toolkit for developing, deploying, and evaluating web-accessible multimodal interfaces
ICMI '08 Proceedings of the 10th international conference on Multimodal interfaces
SMIL State: an architecture and implementation for adaptive time-based web applications
Multimedia Tools and Applications
Deriving vocal interfaces from logical descriptions in multi-device authoring environments
ICWE'10 Proceedings of the 10th international conference on Web engineering
Supporting multimodality in service-oriented model-based development environments
HCSE'10 Proceedings of the Third international conference on Human-centred software engineering
Comparison of common XML-based web user interface languages
Journal of Web Engineering
Model-based customizable adaptation of web applications for vocal browsing
Proceedings of the 29th ACM international conference on Design of communication
Multimodal framework for mobile interaction
Proceedings of the International Working Conference on Advanced Visual Interfaces
The increase in connected mobile computing devices has created the need for ubiquitous Web access. In many usage scenarios, it would be beneficial to interact multimodally. Current Web user interface description languages, such as HTML and VoiceXML, concentrate on only one modality. Some languages, such as SALT and X+V, allow combining aural and visual modalities, but they lack ease of authoring, since both modalities have to be authored separately. Thus, for ease of authoring and maintainability, it is necessary to provide a cross-modal user interface language with a higher semantic level. We propose a novel model, called XFormsMM, which combines XForms 1.0 with modality-dependent stylesheets and a multimodal interaction manager. The model separates the modality-independent parts from the modality-dependent parts, thus automatically providing most of the user interface to all modalities. The model allows flexible modality changes, so that the user can decide which modalities to use and when.
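One way to picture the single-authoring idea is a minimal markup sketch in the spirit of the abstract (not taken from the paper itself): one XForms 1.0 form holds the modality-independent controls, while per-modality presentation is delegated to separate stylesheets. The stylesheet file names and the submission endpoint below are illustrative assumptions.

```xml
<!-- Sketch only: one form, authored once, rendered by either a visual
     or a voice browser. Stylesheet names are hypothetical. -->
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:xforms="http://www.w3.org/2002/xforms">
  <head>
    <!-- Modality-dependent parts: one stylesheet per modality -->
    <link rel="stylesheet" media="screen" href="visual.css"/>
    <link rel="stylesheet" media="aural"  href="aural.css"/>
    <xforms:model>
      <xforms:instance>
        <order xmlns=""><size/><quantity/></order>
      </xforms:instance>
      <xforms:submission id="submit-order" method="post"
                         action="http://example.com/order"/>
    </xforms:model>
  </head>
  <body>
    <!-- Modality-independent part: abstract controls that a visual
         browser renders as widgets and a voice browser as prompts -->
    <xforms:select1 ref="size">
      <xforms:label>Size</xforms:label>
      <xforms:item>
        <xforms:label>Small</xforms:label><xforms:value>S</xforms:value>
      </xforms:item>
      <xforms:item>
        <xforms:label>Large</xforms:label><xforms:value>L</xforms:value>
      </xforms:item>
    </xforms:select1>
    <xforms:input ref="quantity">
      <xforms:label>Quantity</xforms:label>
    </xforms:input>
    <xforms:submit submission="submit-order">
      <xforms:label>Order</xforms:label>
    </xforms:submit>
  </body>
</html>
```

Because the controls (`select1`, `input`, `submit`) describe intent rather than widgets, both modalities are produced from the same source; only the two stylesheets are modality-specific.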