Gestural interfaces are now a familiar mode of interaction, yet entering gestures accurately and efficiently remains challenging. In this paper we present two styles of visual gesture autocompletion for 2D predictive gesture entry, both of which enable users to abbreviate gestures. We experimentally evaluate the two styles against each other and against non-predictive gesture entry. The better-performing style, which we call SimpleFlow, shows that users take significant advantage of gesture autocompletion by entering partial gestures rather than whole gestures. Compared to non-predictive gesture entry, users enter partial gestures that are 41% shorter than the complete gestures, while simultaneously improving the accuracy (+13%, from 68% to 81%) and speed (+10%) of their gesture input. The results provide insights into why SimpleFlow leads to significantly enhanced performance, and show how predictive gesture entry with simple visual autocompletion affects the gesture abbreviation, accuracy, speed, and cognitive load of 2D predictive gesture entry.
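The abstract does not spell out how candidate completions are selected as a partial stroke comes in. As a rough, hypothetical sketch (not the paper's method), the Python below ranks stored gesture templates by how well the partial stroke matches some prefix of each, using $1-recognizer-style equidistant resampling; the function names, the prefix-sweep heuristic, and the choice to normalize only translation are all assumptions made for illustration.

```python
import math

def path_length(pts):
    """Total arc length of a polyline."""
    return sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))

def resample(pts, n=32):
    """Resample a stroke to n equidistant points ($1-recognizer-style)."""
    pts = [tuple(p) for p in pts]
    total = path_length(pts)
    if len(pts) < 2 or total == 0:
        return [pts[0]] * n
    interval = total / (n - 1)
    out, accum, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and accum + d >= interval:
            t = (interval - accum) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # q becomes the start of the next segment
            accum = 0.0
        else:
            accum += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def translate_to_origin(pts):
    """Shift a stroke so it starts at (0, 0).
    Scale/rotation normalization is omitted for brevity."""
    x0, y0 = pts[0]
    return [(x - x0, y - y0) for x, y in pts]

def mean_distance(a, b):
    """Average point-wise Euclidean distance between equal-length strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def predict_completion(partial, templates, n=32):
    """Hypothetical predictor: sweep over prefixes of each template and
    return the (name, distance) of the best-matching one. The predicted
    autocompletion is the remainder of that template."""
    rp = translate_to_origin(resample(partial, n))
    best = None
    for name, tpl in templates.items():
        dense = resample(tpl, 4 * n)  # dense points so short prefixes are well formed
        for k in range(4, 4 * n + 1, 4):  # try progressively longer prefixes
            prefix = translate_to_origin(resample(dense[:k], n))
            d = mean_distance(rp, prefix)
            if best is None or d < best[1]:
                best = (name, d)
    return best
```

A visual autocompletion in the style described here would then render the unmatched remainder of the best-ranked template ahead of the user's pen as a preview, letting the user stop early and accept the prediction. The brute-force prefix sweep is O(templates × prefixes) per input sample; a closed-form matcher in the spirit of Protractor would be a natural optimization if template sets grew large.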