Smartphones are frequently used in environments where the user is distracted by another task, for example by walking or by driving. While the typical smartphone interface relies on hardware and software buttons and surface gestures, researchers have recently posited that motion gestures may offer benefits for executing commands in distracted environments. In this paper, we examine the relative cognitive demands of motion gestures and surface taps and gestures in two specific distracted scenarios: a walking scenario and an eyes-free seated scenario. We show, first, that there is no significant difference in reaction time between motion gestures, taps, and surface gestures on smartphones. We further show that, during walking, motion gestures require significantly less time looking at the smartphone than tapping on the screen does, even with interfaces optimized for eyes-free input. Taken together, these results suggest that, despite somewhat lower throughput, motion gestures may be a beneficial modality for distracted input on smartphones.
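To make the motion-gesture modality concrete, the sketch below shows one simple way such gestures are commonly recognized: thresholding the deviation of the accelerometer magnitude from gravity. This is a minimal illustration only; the gesture, thresholds, and function names are assumptions for this example and are not taken from the paper.

```python
# Minimal sketch of threshold-based motion-gesture detection, assuming a
# stream of (x, y, z) accelerometer samples in m/s^2. The "shake" gesture
# and the specific thresholds are illustrative, not the paper's method.

import math

GRAVITY = 9.81          # magnitude of the acceleration vector at rest
SHAKE_THRESHOLD = 6.0   # deviation from gravity that counts as "motion"
MIN_PEAKS = 3           # number of high-motion samples required

def detect_shake(samples):
    """Return True if the sample stream looks like a shake gesture."""
    deviations = [abs(math.sqrt(x*x + y*y + z*z) - GRAVITY)
                  for x, y, z in samples]
    peaks = sum(1 for d in deviations if d > SHAKE_THRESHOLD)
    return peaks >= MIN_PEAKS

# A still phone produces no peaks; vigorous motion produces several.
still = [(0.0, 0.0, 9.81)] * 20
shaken = [(0.0, 0.0, 9.81), (14.0, 0.0, 9.81), (-14.0, 0.0, 9.81),
          (15.0, 0.0, 9.81), (0.0, 0.0, 9.81)]
```

Because such a detector needs no visual target, it can be triggered eyes-free, which is the property the experiments above compare against on-screen tapping.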