Graphical user interfaces on mobile devices have several drawbacks in mobile situations. In this paper, we present Foogue, an eyes-free interface that uses spatial audio output and gesture input. Foogue requires no visual attention and therefore does not divert the user's eyes from the task at hand. Foogue has two modes, designed to fit the usage patterns of mobile users. For input, we designed a gesture language built from a small set of simple yet easily distinguishable gesture elements.
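The gesture language sketched in the abstract composes commands from a small alphabet of distinguishable elements. A minimal illustration of that idea might look like the following; note that the stroke elements and command names here are hypothetical, chosen for illustration, and are not taken from the paper.

```python
from enum import Enum

# Hypothetical alphabet of simple, easy-to-differentiate gesture elements.
class Stroke(Enum):
    UP = "up"
    DOWN = "down"
    LEFT = "left"
    RIGHT = "right"
    CIRCLE = "circle"

# Hypothetical command vocabulary: each command is a short stroke sequence.
COMMANDS = {
    (Stroke.UP, Stroke.CIRCLE): "play",
    (Stroke.DOWN, Stroke.CIRCLE): "pause",
    (Stroke.LEFT, Stroke.LEFT): "previous",
    (Stroke.RIGHT, Stroke.RIGHT): "next",
}

def recognize(strokes):
    """Map a recognized stroke sequence to a command, or None if unknown."""
    return COMMANDS.get(tuple(strokes))

print(recognize([Stroke.UP, Stroke.CIRCLE]))  # -> play
print(recognize([Stroke.UP]))                 # -> None
```

Building commands as sequences over a small element set keeps each element easy to recognize while still allowing a usefully large command vocabulary.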