Gestural interfaces allow complex manipulative interactions that are hard to manage with traditional event handlers. Such interactions last longer than those in form-based user interfaces, and it is often important to give users intermediate feedback while the gesture is being performed. As a result, gesture specification code mixes the recognition logic with the feedback definition, which makes it difficult 1) to write maintainable code and 2) to reuse gesture definitions across different applications. To overcome these limitations, the research community has investigated declarative approaches to specifying the temporal evolution of gestures. In this paper, we discuss the creation of gestural interfaces using GestIT, a framework that supports the declarative and compositional definition of gestures for different recognition platforms (e.g., multitouch and full-body), through a set of examples and a comparison with existing approaches.
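To make the compositional idea concrete, the following is a minimal sketch of the general technique, not GestIT's actual API: all names (GroundTerm, Seq, Star, pan, on_match) are hypothetical. It shows a gesture declared as an expression over atomic input events, composed with sequence and iteration operators, with intermediate feedback attached to individual terms so the recognition structure stays separate from the feedback definition.

```python
# Hypothetical sketch of declarative, compositional gesture definition.
# Each expression consumes events and reports (consumed, completed).

class GroundTerm:
    """An atomic event (e.g. 'down', 'move', 'up') with an optional
    feedback callback fired when the event is matched."""
    def __init__(self, name, on_match=None):
        self.name, self.on_match = name, on_match
    def feed(self, event):
        if event["type"] == self.name:
            if self.on_match:
                self.on_match(event)  # intermediate feedback
            return True, True
        return False, False
    def reset(self):
        pass

class Seq:
    """Sequential composition: recognize each child expression in order."""
    def __init__(self, *children):
        self.children, self.i = list(children), 0
    def feed(self, event):
        consumed, done = self.children[self.i].feed(event)
        if done:
            self.i += 1
        # An unconsumed event that completed a child (e.g. the end of an
        # iteration) is re-fed to the next child in the sequence.
        if done and not consumed and self.i < len(self.children):
            return self.feed(event)
        return consumed, self.i == len(self.children)
    def reset(self):
        self.i = 0
        for c in self.children:
            c.reset()

class Star:
    """Iteration: repeat the child expression; an event the child cannot
    consume ends the iteration and stays available to the enclosing
    expression."""
    def __init__(self, child):
        self.child = child
    def feed(self, event):
        consumed, done = self.child.feed(event)
        if consumed:
            if done:
                self.child.reset()
            return True, False
        return False, True
    def reset(self):
        self.child.reset()

# A pan gesture: touch down, any number of moves (each updating the UI),
# then touch up. Recognition structure and feedback are kept separate.
pan = Seq(
    GroundTerm("down"),
    Star(GroundTerm("move", on_match=lambda e: print("drag to", e["pos"]))),
    GroundTerm("up", on_match=lambda e: print("pan finished")),
)

for ev in [{"type": "down", "pos": (0, 0)},
           {"type": "move", "pos": (5, 2)},
           {"type": "move", "pos": (9, 4)},
           {"type": "up",   "pos": (9, 4)}]:
    _, completed = pan.feed(ev)
    if completed:
        break
```

Under these assumptions, further operators (e.g., choice or parallel composition) would follow the same pattern, and because feedback lives only in the callbacks attached to ground terms, the same gesture expression can be reused across applications with different feedback, which is the maintainability and reuse benefit the declarative approach aims at.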