Configurable executable task models supporting the transition from design time to runtime
HCII'11 Proceedings of the 14th international conference on Human-computer interaction: design and development approaches - Volume Part I
This paper describes an approach that uses task modelling for the development of distributed and multimodal user interfaces. We propose enriching tasks with possible interaction modalities so that the user can perform each task through an appropriate modality. The information in the augmented task model is then used by a generic runtime architecture, which we have extended to make runtime decisions about distributing the user interface among several devices based on the specified interaction modalities. The approach was tested in the implementation of several case studies; one of these is presented in this paper to clarify the approach.
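The core idea of the abstract, tasks annotated with the modalities through which they may be performed, and a runtime step that assigns each task to a device supporting one of those modalities, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the class names, the flat modality lists, and the first-fit assignment strategy are all assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A task enriched with the interaction modalities it may be performed through."""
    name: str
    modalities: list  # e.g. ["graphical", "voice"], in order of preference

@dataclass
class Device:
    """A device in the environment, with the modalities it can render."""
    name: str
    supported: list

def distribute(tasks, devices):
    """Assign each task to the first device supporting one of its modalities.

    A real runtime architecture would also react to devices appearing and
    disappearing; here we only show the static assignment step.
    """
    assignment = {}
    for task in tasks:
        for modality in task.modalities:
            device = next((d for d in devices if modality in d.supported), None)
            if device is not None:
                assignment[task.name] = (device.name, modality)
                break
    return assignment

# Hypothetical example: two tasks distributed over two devices.
tasks = [Task("select_item", ["graphical"]),
         Task("confirm_step", ["voice", "graphical"])]
devices = [Device("smartphone", ["graphical"]),
           Device("headset", ["voice"])]

print(distribute(tasks, devices))
# {'select_item': ('smartphone', 'graphical'), 'confirm_step': ('headset', 'voice')}
```

In this sketch the modality annotation on each task is what drives the distribution decision, mirroring how the augmented task model feeds the runtime architecture described in the abstract.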