Implementing controls in the car has become a major challenge: simple physical buttons do not scale to the growing number of assistive, comfort, and infotainment functions. Current solutions include hierarchical menus and multi-functional control devices, which increase complexity and visual demand. Another option is speech control, which is not widely accepted, as it does not support visibility of actions, fine-grained feedback, or easy undo. Our approach combines speech and gestures. By using speech to identify functions, we exploit the visibility of objects in the car (e.g., the mirror) and provide simple access to a wide range of functions, comparable to a very broad menu. By using gestures for manipulation (e.g., left/right), we provide fine-grained control with immediate feedback and easy undo of actions. In a user-centered process, we determined a set of user-defined gestures as well as common voice commands. For a prototype, we integrated this interaction technique into a car interior mock-up and a driving simulator. In a study with 16 participants, we explored the impact of this form of multimodal interaction on driving performance against a baseline using physical buttons. The results indicate that using speech and gesture is slower than using buttons but yields similar driving performance. In a DALI (Driving Activity Load Index) questionnaire, participants reported lower visual demand when using speech and gestures.
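The division of labor described above (speech selects the function, gestures manipulate it, with immediate feedback and undo) can be sketched as a small controller. This is a minimal illustrative sketch with hypothetical object names and values, not the authors' implementation:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

# Hypothetical set of controllable in-car functions and example values.
DEFAULT_STATE = {"mirror": 0, "volume": 5, "temperature": 21}

@dataclass
class MultimodalController:
    state: Dict[str, int] = field(default_factory=lambda: dict(DEFAULT_STATE))
    target: Optional[str] = None
    history: List[Tuple[str, int]] = field(default_factory=list)

    def on_speech(self, word: str) -> None:
        """Speech identifies the function: like selecting from a very broad menu."""
        if word in self.state:
            self.target = word

    def on_gesture(self, direction: str) -> int:
        """A gesture applies a fine-grained step to the selected function
        and returns the new value as immediate feedback."""
        if self.target is None:
            raise RuntimeError("No function selected; say an object name first.")
        delta = {"left": -1, "right": +1}[direction]
        self.state[self.target] += delta
        self.history.append((self.target, delta))   # remember for undo
        return self.state[self.target]

    def undo(self) -> None:
        """Easy undo: revert the most recent manipulation."""
        target, delta = self.history.pop()
        self.state[target] -= delta

# Example dialogue: "volume" (speech) + right-gesture, then undo.
ctl = MultimodalController()
ctl.on_speech("volume")
ctl.on_gesture("right")   # volume: 5 -> 6
ctl.undo()                # volume back to 5
```

The key design choice mirrored here is that speech never changes a value directly; it only binds the gesture channel to a target, so every state change is a small, reversible gesture step.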