Multi-touch gestures have become popular on a wide range of touchscreen devices, but programming these gestures remains an art. It is time-consuming and error-prone for a developer to handle the complicated touch state transitions that result from multiple fingers and their simultaneous movements. In this paper, we present Gesture Coder, which, by learning from a few examples given by the developer, automatically generates code that recognizes multi-touch gestures, tracks their state changes, and invokes corresponding application actions. Developers can easily test the generated code in Gesture Coder, refine it by adding more examples, and, once they are satisfied with its performance, integrate the code into their applications. We evaluated our learning algorithm extensively under various conditions on a large set of noisy data. Our results show that its performance is sufficient for rapid prototyping and improves with more and higher-quality training data. We also evaluated Gesture Coder's usability through a within-subject study in which we asked participants to implement a set of multi-touch interactions with and without Gesture Coder. The results showed that Gesture Coder significantly lowers the threshold for programming multi-touch gestures.
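To make the pain point concrete, below is a minimal, self-contained Java sketch of the kind of hand-written state machine the abstract argues is tedious and error-prone: distinguishing a one-finger drag from a two-finger pinch and invoking application callbacks. All names here (TouchStateMachine, GestureListener, fingerDown, and so on) are hypothetical illustrations, not Gesture Coder's generated API, and the event model is deliberately simplified (no velocity, no cancellation, no more than two fingers).

import java.util.HashMap;
import java.util.Map;

// Hypothetical hand-written recognizer for two gestures: drag and pinch.
public class TouchStateMachine {
    enum State { IDLE, POSSIBLE_DRAG, DRAGGING, PINCHING }

    // Stand-in for the application actions a recognizer invokes.
    interface GestureListener {
        void onDrag(double dx, double dy);
        void onPinch(double scale);
    }

    private State state = State.IDLE;
    private final Map<Integer, double[]> fingers = new HashMap<>(); // pointer id -> (x, y)
    private double startSpread; // finger distance when the pinch began
    private final GestureListener listener;

    public TouchStateMachine(GestureListener listener) { this.listener = listener; }

    public void fingerDown(int id, double x, double y) {
        fingers.put(id, new double[] { x, y });
        if (fingers.size() == 1) {
            state = State.POSSIBLE_DRAG;
        } else if (fingers.size() == 2) {
            // A second finger forces a reinterpretation of the gesture in progress.
            startSpread = spread();
            state = State.PINCHING;
        }
        // Three or more fingers are ignored here; handling them is exactly the
        // kind of combinatorial case that hand-written code must enumerate.
    }

    public void fingerMove(int id, double x, double y) {
        double[] prev = fingers.get(id);
        if (prev == null) return;
        double dx = x - prev[0], dy = y - prev[1];
        fingers.put(id, new double[] { x, y });
        switch (state) {
            case POSSIBLE_DRAG:
            case DRAGGING:
                state = State.DRAGGING;
                listener.onDrag(dx, dy);
                break;
            case PINCHING:
                listener.onPinch(spread() / startSpread);
                break;
            default:
                break;
        }
    }

    public void fingerUp(int id) {
        fingers.remove(id);
        if (fingers.isEmpty()) {
            state = State.IDLE;
        } else if (fingers.size() == 1) {
            state = State.POSSIBLE_DRAG; // back to a single finger: reinterpret again
        }
    }

    // Distance between the two tracked fingers (only valid while pinching).
    private double spread() {
        double[][] pts = fingers.values().toArray(new double[0][]);
        return Math.hypot(pts[0][0] - pts[1][0], pts[0][1] - pts[1][1]);
    }
}

Even this toy version must reinterpret the gesture whenever a finger lands or lifts; the point of Gesture Coder, per the abstract, is to infer such a recognizer, its state transitions, and its callbacks from demonstrated examples rather than requiring developers to enumerate the transitions by hand.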