When using motion gestures, three-dimensional movements of a mobile phone, as an input modality, one significant challenge is how to teach end users the movement parameters necessary to successfully issue a command. Is a simple video or image depicting movement of a smartphone sufficient? Or are three-dimensional depictions of movement on external screens needed to train users? In this paper, we explore mechanisms to teach end users motion gestures, examining two factors. The first factor is how to represent motion gestures: as icons that describe movement, as video that depicts movement on the smartphone screen, or via a Kinect-based teaching mechanism that captures and depicts the gesture on an external display in three-dimensional space. The second factor is recognizer feedback, i.e., a simple representation of how close a performed motion gesture is to the desired gesture, based on a distance metric extracted from the recognizer. We show that, by combining video with recognizer feedback, participants master motion gestures as quickly as end users who learn using the Kinect. These results demonstrate the viability of training end users to perform motion gestures using only the smartphone display.
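
The abstract does not specify the recognizer's internals. As a minimal sketch, assuming a template-based recognizer that compares accelerometer traces with dynamic time warping (DTW), the distance-based proximity feedback could be computed as below; the function names `dtw_distance` and `proximity_feedback`, and the `worst_case` normalization constant, are illustrative assumptions, not the paper's implementation.

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two gesture traces.

    Each trace is a sequence of (x, y, z) accelerometer samples. DTW is
    one common distance metric for motion-gesture recognizers; the
    paper's actual recognizer and metric may differ.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j]: best cumulative cost aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])  # per-sample Euclidean distance
            cost[i][j] = d + min(cost[i - 1][j],      # skip a sample in a
                                 cost[i][j - 1],      # skip a sample in b
                                 cost[i - 1][j - 1])  # align the two samples
    return cost[n][m]

def proximity_feedback(candidate, template, worst_case):
    """Map a raw recognizer distance to a 0..1 proximity score for display.

    `worst_case` is a hypothetical normalization constant (for example,
    the largest distance observed during training); 1.0 indicates a
    near-perfect match to the desired gesture.
    """
    d = dtw_distance(candidate, template)
    return max(0.0, 1.0 - d / worst_case)

# Example: a candidate trace that roughly follows the template scores high.
template = [(0.0, 0.0, 1.0), (0.5, 0.0, 1.0), (1.0, 0.0, 1.0)]
candidate = [(0.1, 0.0, 1.0), (0.6, 0.1, 1.0), (1.0, 0.0, 0.9)]
print(proximity_feedback(candidate, template, worst_case=5.0))
```

A single scalar score like this matches the abstract's description of "a simple representation" of proximity: it can be rendered on the smartphone screen as a bar or percentage without exposing any recognizer internals to the learner.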