Gestures for interfaces should be short, pleasing, intuitive, and easily recognized by a computer. However, it is a challenge for interface designers to create gestures that are easily distinguishable from users' normal movements. Our tool, MAGIC Summoning, addresses this problem. Given a specific platform and task, we gather a large database of unlabeled sensor data captured in the environments in which the system will be used (an "Everyday Gesture Library," or EGL). The EGL is quantized and indexed via multi-dimensional Symbolic Aggregate approXimation (SAX) to enable quick searching. MAGIC exploits the SAX representation of the EGL to suggest gestures with a low likelihood of false triggering. Suggested gestures are ordered according to brevity and simplicity, freeing the interface designer to focus on the user experience. Once a gesture is selected, MAGIC can output synthetic examples of the gesture to train a chosen classifier (for example, a hidden Markov model). If interface designers suggest their own gestures and provide several examples, MAGIC estimates how accurately those gestures can be recognized and estimates their false positive rates by comparing them against the natural movements in the EGL. We demonstrate MAGIC's effectiveness in gesture selection and its helpfulness in creating accurate gesture recognizers.
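The SAX quantization mentioned above converts a numeric sensor stream into a short symbolic string, which is what makes the EGL cheaply indexable and searchable. A minimal sketch of single-dimension SAX symbolization (z-normalize, piecewise aggregate approximation, then quantize against Gaussian breakpoints) might look like the following; the function name and parameters are illustrative, not MAGIC's actual API:

```python
import numpy as np

# Breakpoints dividing a standard normal distribution into four
# equiprobable regions (the standard SAX breakpoints for a
# 4-symbol alphabet).
BREAKPOINTS = [-0.6745, 0.0, 0.6745]

def sax_word(series, n_segments=8, alphabet="abcd"):
    """Convert one sensor stream to a SAX word (illustrative sketch)."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)          # z-normalize
    # Piecewise Aggregate Approximation: mean of each segment
    segments = np.array_split(x, n_segments)
    paa = np.array([seg.mean() for seg in segments])
    # Quantize each segment mean into an alphabet symbol
    idx = np.searchsorted(BREAKPOINTS, paa)
    return "".join(alphabet[i] for i in idx)

# A steadily rising signal maps to monotonically increasing symbols:
print(sax_word(range(16), n_segments=4))  # → "abcd"
```

For multi-dimensional sensor data (e.g., 3-axis accelerometers), each axis would be symbolized separately and the per-axis words combined, so that EGL lookups reduce to cheap string comparisons rather than distance computations on raw samples.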