Over the last decade, the number of functions in automotive user interfaces has increased rapidly. Besides the traditional controls for driving a car, driver assistance, infotainment, entertainment, and comfort systems need to be controlled while driving. This not only increases the driver's cognitive workload but also makes the design of automotive user interfaces more complex. In this paper, we provide models and tools for rapid prototyping and evaluation of user interfaces in this context. Usually, functional prototypes of user interfaces are implemented so that their usability and quality can be assessed in time-consuming user studies. In contrast, our approach uses an adapted Keystroke-Level Model (KLM) based on empirically collected data for typical in-car operations. It takes into account attention switching in the car between the primary driving task and other tasks. We present KLM operator times that we determined in a user study, as well as a formula for estimating task completion time. The presented model is the foundation of the MI-AUI prototyping tool, which we implemented to permit the creation of automotive interfaces using tangible controls. By demonstrating a typical operation with the MI-AUI prototype, the estimated task completion time can be calculated. MI-AUI is an evaluation tool that can be quickly and easily applied in early stages of the design process, without the need to involve real drivers.
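As a rough illustration of the principle behind KLM-based estimation, the sketch below sums per-operator times over an operator sequence. The operator names and times are illustrative placeholders, not the empirically measured automotive operator times reported in the paper:

```python
# Minimal sketch of Keystroke-Level Model (KLM) task-time estimation.
# All operator times below are hypothetical placeholder values.
OPERATOR_TIMES = {
    "K": 0.20,  # press a button (placeholder value)
    "P": 1.10,  # point to / reach for a control (placeholder value)
    "H": 0.40,  # home the hand onto a control (placeholder value)
    "M": 1.35,  # mental preparation (placeholder value)
    "S": 0.50,  # attention switch between driving and the interface (assumed operator)
}

def estimate_task_time(sequence):
    """Estimate task completion time as the sum of the operator times."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# E.g. switch attention, prepare mentally, home the hand, press two buttons:
total = estimate_task_time(["S", "M", "H", "K", "K"])
```

A real model of this kind would replace the placeholder table with the empirically determined operator times and account for operators that overlap or depend on driving context.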