Hands-free, speech-based navigation during dictation: difficulties, consequences, and solutions

  • Authors:
  • Andrew Sears; Jinjuan Feng; Kwesi Oseitutu; Clare-Marie Karat

  • Affiliations:
  • Interactive Systems Research Center, Information Systems Department, UMBC, Baltimore, MD (Sears, Feng); IBM T. J. Watson Research Center, Hawthorne, NY and UMBC (Oseitutu); IBM T. J. Watson Research Center, Hawthorne, NY (Karat)

  • Venue:
  • Human-Computer Interaction
  • Year:
  • 2003

Abstract

Speech recognition technology continues to improve, but users still experience significant difficulty using the software to create and edit documents. In fact, a recent study confirmed that users spent 66% of their time on correction activities and only 33% on dictation. Of particular interest is the fact that one third of the users' time was spent simply navigating from one location to another. In this article, we investigate the efficacy of hands-free, speech-based navigation in the context of dictation-oriented activities. We provide detailed data regarding failure rates, the reasons for failures, and the consequences of these failures. Our results confirm that direction-based navigation (e.g., "Move up two lines") is less effective than target-based navigation (e.g., "Select target"). We identify the three most common reasons behind the failure of speech-based navigation commands: recognition errors, invalid commands, and pauses in the middle of issuing a command. We also document the consequences of failed speech-based navigation commands. On the basis of this analysis, we identify changes that will reduce failure rates and lessen the consequences of some remaining failures. We also propose a more substantial set of changes to simplify direction-based navigation and enhance target-based navigation. The efficacy of this final set of recommendations must be evaluated through future empirical studies.