Using confidence scores to improve hands-free speech based navigation in continuous dictation systems

  • Authors:
  • Jinjuan Feng; Andrew Sears

  • Affiliations:
  • UMBC, Baltimore, MD; UMBC, Baltimore, MD

  • Venue:
  • ACM Transactions on Computer-Human Interaction (TOCHI)
  • Year:
  • 2004

Abstract

Speech recognition systems have improved dramatically, but recent studies confirm that error correction activities still account for 66-75% of users' time, and 50% of that time is spent just getting to the errors that need to be corrected. While researchers have suggested that confidence scores could prove useful during the error correction process, the focus has typically been on error detection. More importantly, empirical studies have failed to confirm any measurable benefits when confidence scores are used in this way within dictation-oriented applications. In this article, we provide data that explains why confidence scores are unlikely to be useful for error detection. We propose a new navigation technique for use when speech-only interactions are strongly preferred and common, desktop-sized displays are available. The results of an empirical study that highlights the potential of this new technique are reported. An informal comparison between the current study and previous research suggests the new technique reduces time spent on navigation by 18%. Future research should include additional studies that compare the proposed technique to previous non-speech and speech-based navigation solutions.
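The confidence-score-based error detection the abstract argues against is conventionally a per-word thresholding scheme: words whose recognizer confidence falls below a cutoff are flagged as likely misrecognitions. A minimal sketch of that conventional approach (not the authors' proposed navigation technique; the function name, words, scores, and threshold are all illustrative) might look like:

```python
# Illustrative sketch of confidence-based error detection in dictation:
# flag words whose recognizer confidence falls below a fixed threshold.
# Names and values here are hypothetical, not from the article.

def flag_low_confidence(words, scores, threshold=0.5):
    """Return indices of words whose confidence score is below threshold."""
    return [i for i, score in enumerate(scores) if score < threshold]

# A hypothetical recognized utterance with per-word confidence scores;
# "knew" is a plausible misrecognition of "new".
words = ["please", "save", "the", "knew", "file"]
scores = [0.92, 0.88, 0.95, 0.41, 0.79]

flagged = flag_low_confidence(words, scores)
print([words[i] for i in flagged])  # candidates the user would be asked to review
```

The article's point is that, empirically, this kind of flagging yielded no measurable benefit for error detection in dictation tasks, which motivates redirecting confidence scores toward navigation instead.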