Gesture-based interfaces are becoming more prevalent and complex, requiring non-trivial learning of gesture sets. Many methods for learning gestures have been proposed, but they are often evaluated with short-term recall tests that measure user performance rather than learning. We evaluated four types of gesture guides using a retention and transfer paradigm common in motor learning experiments, and found results different from those typically reported with recall tests. The results indicate that guide systems with higher levels of guidance yield large performance benefits while the guide is in use, but are ultimately detrimental to learning. We propose an adaptive guide that avoids these drawbacks and enables a smooth transition from novice to expert. The contrast between learning and performance can be explained by the guidance hypothesis, and it has important implications for the design and evaluation of future gesture learning systems.