Specifying gestures by example
Proceedings of the 18th annual conference on Computer graphics and interactive techniques
Single display groupware: a model for co-present collaboration
Proceedings of the SIGCHI conference on Human Factors in Computing Systems
Soft machines: A philosophy of user-computer interface design
CHI '83 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Jess in Action: Java Rule-Based Systems
iGesture: A General Gesture Recognition Framework
ICDAR '07 Proceedings of the Ninth International Conference on Document Analysis and Recognition - Volume 02
A multitouch software architecture
Proceedings of the 5th Nordic conference on Human-computer interaction: building bridges
Unifying events from multiple devices for interpreting user intentions through natural gestures
INTERACT'11 Proceedings of the 13th IFIP TC 13 international conference on Human-computer interaction - Volume Part I
Mudra: a unified multimodal interaction framework
ICMI '11 Proceedings of the 13th international conference on multimodal interfaces
Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction
Gesture coder: a tool for programming multi-touch gestures by demonstration
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Proton: multitouch gestures as regular expressions
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Proceedings of the 4th ACM SIGCHI symposium on Engineering interactive computing systems
Proceedings of the 10th European conference on Interactive tv and video
Proton++: a customizable declarative multitouch framework
Proceedings of the 25th annual ACM symposium on User interface software and technology
Parallel gesture recognition with soft real-time guarantees
Proceedings of the 3rd annual conference on Systems, programming, and applications: software for humanity
Parallel gesture recognition with soft real-time guarantees
Proceedings of the 2nd edition on Programming systems, languages and applications based on actors, agents, and decentralized control abstractions
GestureAgents: an agent-based framework for concurrent multi-task multi-user interaction
Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction
Gesture studio: authoring multi-touch interactions through demonstration and declaration
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
GestIT: a declarative and compositional framework for multiplatform gesture definition
Proceedings of the 5th ACM SIGCHI symposium on Engineering interactive computing systems
Water Ball Z: an augmented fighting game using water as tactile feedback
Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction
Over the past few years, multi-touch user interfaces have evolved from research prototypes into mass-market products. This evolution has been driven mainly by innovative devices such as Apple's iPhone and Microsoft's Surface tabletop computer. Unfortunately, existing multi-touch development frameworks lack software engineering abstractions: many multi-touch applications rely on hard-coded, procedural, low-level event processing, which leads to proprietary solutions with poor gesture extensibility and little cross-application reusability. We present Midas, a declarative model for the definition and detection of multi-touch gestures in which gestures are expressed as logical rules over a set of input facts. We highlight how our rule-based language approach improves gesture extensibility and reusability. Finally, we introduce JMidas, an instantiation of Midas for the Java programming language, and describe how JMidas has been applied to implement a number of innovative multi-touch gestures.
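To make the "logical rules over input facts" idea concrete, the following is a minimal, hypothetical Java sketch of that style of gesture definition. It is not JMidas's actual API: the names `TouchFact`, `GestureRule`, and `FactBase` are illustrative inventions, and a real rule engine would use pattern matching and incremental evaluation rather than re-testing a predicate over the whole fact base.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// A touch event asserted into the fact base. All class names here are
// illustrative; they do not reflect the real JMidas API.
final class TouchFact {
    final int fingerId; final double x, y; final long timeMs;
    TouchFact(int fingerId, double x, double y, long timeMs) {
        this.fingerId = fingerId; this.x = x; this.y = y; this.timeMs = timeMs;
    }
}

// A gesture expressed declaratively: a name plus a condition over facts.
final class GestureRule {
    final String name; final Predicate<List<TouchFact>> condition;
    GestureRule(String name, Predicate<List<TouchFact>> condition) {
        this.name = name; this.condition = condition;
    }
}

final class FactBase {
    private final List<TouchFact> facts = new ArrayList<>();
    private final List<GestureRule> rules = new ArrayList<>();

    void addRule(GestureRule rule) { rules.add(rule); }

    // Asserting a fact re-evaluates all rules; matched rule names are
    // returned (instead of firing callbacks) to keep the sketch small.
    List<String> assertFact(TouchFact fact) {
        facts.add(fact);
        List<String> fired = new ArrayList<>();
        for (GestureRule rule : rules)
            if (rule.condition.test(facts)) fired.add(rule.name);
        return fired;
    }
}

public class MidasSketch {
    public static void main(String[] args) {
        FactBase base = new FactBase();
        // "Two-finger touch": facts from two distinct fingers whose
        // timestamps fall within a 200 ms window.
        base.addRule(new GestureRule("two-finger-touch", facts ->
            facts.stream().map(f -> f.fingerId).distinct().count() >= 2
            && facts.stream().mapToLong(f -> f.timeMs).max().orElse(0)
             - facts.stream().mapToLong(f -> f.timeMs).min().orElse(0) <= 200));

        System.out.println(base.assertFact(new TouchFact(0, 10, 10, 0)));   // []
        System.out.println(base.assertFact(new TouchFact(1, 50, 50, 120))); // [two-finger-touch]
    }
}
```

The point of the declarative style, as the abstract argues, is that a gesture like `two-finger-touch` is a reusable rule added to the engine, not event-handling logic hard-coded into one application.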