Automation in the course of user-interface (UI) development has the potential to save time and resources. For graphical user interfaces, considerable research on automated generation has been performed; while the results are still not in widespread use, the problems are by now well understood. In contrast, automated generation of multimodal UIs is still in its infancy. We address this problem by proposing a tool-supported process for generating multimodal UIs for dialogue-based interactive systems. For its concrete enactment, we provide tool support for generating a runtime configuration and glue code. In a nutshell, our approach generates multimodal dialogue-based UIs semi-automatically.