In-car devices that use audio output have been shown to be less distracting than traditional graphical user interfaces, but they can be cumbersome and slow to use. In this paper, we report an experiment that demonstrates how these performance characteristics affect whether people will elect to use an audio interface in a multitasking situation. While steering a simulated vehicle, participants had to locate a piece of information in a short passage of text. The text was presented either on a visual interface or through a text-to-speech audio interface, and the relative importance of each task was varied. A no-choice/choice paradigm was used: participants first gained experience with each of the two interfaces before being given a choice of which interface to use in later trials. Interface performance characteristics, as measured in the no-choice phase, together with the relative importance of each task, influenced which output modality was chosen in the choice phase. Participants who prioritized the secondary task tended to select the faster but more distracting visual interface over the audio interface, and as a result had poorer lane-keeping performance. This work demonstrates how a user's task objective influences modality choices with multimodal devices in multitask environments.