Longitudinal study of people learning to use continuous voice-based cursor control

  • Authors:
  • Susumu Harada, Jacob O. Wobbrock, Jonathan Malkin, Jeff A. Bilmes, James A. Landay

  • Affiliations:
  • University of Washington, Seattle, WA, USA (all authors)

  • Venue:
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Year:
  • 2009

Abstract

We conducted a 2.5-week longitudinal study with five motor-impaired (MI) and four non-impaired (NMI) participants, in which they learned to use the Vocal Joystick, a voice-based user interface control system. We found that the participants were able to learn the mapping between the vowel sounds and directions used by the Vocal Joystick, and they showed marked improvement in their target acquisition performance. At the end of the ten-session period, the NMI group reached the previously measured "expert" level of Vocal Joystick performance, and the MI group reached 70% of that level. Two of the MI participants were also able to approach the performance of their preferred device, a touchpad. We report on a number of issues that can inform the development of further enhancements in voice-driven computer control.