Multimodal applications are typically developed together with their user interfaces, leading to a tight coupling between the two. In addition, human-computer interaction often receives too little attention during development. The result can be a poor user interface once additional modalities have to be integrated or the application has to be ported to a different device. A promising way to create multimodal user interfaces with less effort, for applications running on several devices, is semi-automatic generation. This work shows how multimodal interfaces can be generated by transforming a discourse model into several automatically rendered modalities. The approach decouples the design of human-computer interaction from the integration of specific modalities. The presented communication platform builds on this transformation process: it allows high-level integration of input such as speech, hand gestures, and a WIMP UI, and can generate output in the speech and GUI modalities. Other input and output modalities can be integrated as well. Moreover, the platform is applicable to several applications and to different devices, e.g., PDAs and PCs.
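The core idea above, one modality-independent discourse model rendered into several concrete modalities by separate renderers, can be sketched roughly as follows. All names, the toy discourse element, and the two renderers are illustrative assumptions for this sketch, not the platform's actual model or API:

```python
from dataclasses import dataclass

# Hypothetical, minimal discourse element. A real discourse model
# (communicative acts exchanged between user and system) is far richer.
@dataclass
class CommunicativeAct:
    kind: str     # e.g., "question" or "informing"
    content: str  # the proposition to convey

def render_gui(act: CommunicativeAct) -> dict:
    # GUI renderer: maps a communicative act to an abstract widget
    # description; the renderer, not the application, picks the widget.
    if act.kind == "question":
        return {"widget": "prompt", "label": act.content, "input": "text"}
    return {"widget": "label", "text": act.content}

def render_speech(act: CommunicativeAct) -> str:
    # Speech renderer: maps the same act to a spoken prompt, so the
    # application logic stays untouched when a modality is added.
    prefix = "Please answer: " if act.kind == "question" else ""
    return prefix + act.content

act = CommunicativeAct("question", "Which destination do you want?")
gui_form = render_gui(act)       # abstract GUI description
speech_prompt = render_speech(act)  # spoken equivalent of the same act
```

Because both renderers consume the same discourse element, adding a further output modality (or a new device) means adding one more renderer rather than changing the application, which is the loose coupling the abstract argues for.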