A longitudinal evaluation of hands-free speech-based navigation during dictation

  • Authors:
  • Jinjuan Feng; Andrew Sears; Clare-Marie Karat

  • Affiliations:
  • Computer and Information Science Department, Towson University, 7800 York Road, Towson, MD 21252, USA; Interactive Systems Research Center, Information Systems Department, UMBC, 1000 Hilltop Circle, Baltimore, MD 21250, USA; IBM TJ Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532, USA

  • Venue:
  • International Journal of Human-Computer Studies
  • Year:
  • 2006

Abstract

Despite a reported recognition accuracy rate of 98%, speech recognition technologies have yet to be widely adopted by computer users. When considering hands-free use of speech-based solutions, as is the case for individuals with physical impairments that interfere with the use of traditional solutions such as a mouse, the considerable time required to complete basic navigation tasks presents a significant barrier to adoption. Several solutions were proposed to improve navigation efficiency based on the results of a previous study. In the current study, a longitudinal experiment was conducted to investigate the process by which users learn to use hands-free speech-based navigation in the context of large-vocabulary, continuous dictation tasks, as well as the efficacy of the proposed solutions. Because initial interactions strongly influence the adoption of speech-based solutions, the current study focused on these critical initial interactions of individuals with no prior experience using speech-based dictation solutions. Our results confirm the efficacy of the solutions proposed earlier while providing valuable insights into the strategies users employ when issuing speech-based navigation commands, as well as the design decisions that can influence these patterns.