Gesture-based control of interfaces can enable interaction in situations where hardware controls are missing and can support impaired people where other controls fail. The rich spectrum of hand postures combined with movements offers great interaction possibilities, but requires extensive user testing to find an optimal control with sufficient control performance and a low error rate. In this paper we describe a declarative, model-based gesture navigation design based on state charts that can be used for the rapid generation of different prototypes, accelerating the user testing and comparison of different interaction controls. We use declarative modeling to design and generate several variants of a gesture-based interface navigation control. The models are described using state charts and are transformed into state machines at system runtime, where they can be directly executed to form a multimodal interaction.
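The core idea of interpreting a declarative state-chart model as an executable state machine at runtime can be illustrated with a minimal sketch. This is not the authors' implementation; the state names, gesture labels, and transition table below are hypothetical placeholders for whatever a concrete navigation model would define:

```python
# Hypothetical declarative model: states of a menu navigation and the
# gesture-triggered transitions between them (illustrative only).
NAVIGATION_CHART = {
    "menu":     {"swipe_right": "submenu", "point": "selected"},
    "submenu":  {"swipe_left": "menu", "point": "selected"},
    "selected": {"fist": "menu"},
}

class StateMachine:
    """Interprets a declarative state-chart table as an executable
    state machine at runtime."""

    def __init__(self, chart, initial):
        self.chart = chart
        self.state = initial

    def handle(self, gesture):
        """Fire the transition for a recognized gesture; unknown
        gestures in the current state are ignored."""
        target = self.chart.get(self.state, {}).get(gesture)
        if target is not None:
            self.state = target
        return self.state

# Driving the machine with a recognized gesture sequence:
sm = StateMachine(NAVIGATION_CHART, "menu")
sm.handle("swipe_right")   # menu -> submenu
sm.handle("point")         # submenu -> selected
print(sm.state)            # selected
```

Because the transition table is plain data, swapping in a different `NAVIGATION_CHART` yields a new prototype variant without changing the interpreter, which is what makes rapid comparison of interaction controls feasible.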