CanSpeak: a customizable speech interface for people with dysarthric speech

  • Authors:
  • Foad Hamidi;Melanie Baljko;Nigel Livingston;Leo Spalteholz

  • Affiliations:
Department of Computer Science and Engineering, York University, Toronto, Ontario, Canada (Foad Hamidi, Melanie Baljko); CanAssist, University of Victoria, Victoria, BC, Canada (Nigel Livingston, Leo Spalteholz)

  • Venue:
  • ICCHP'10 Proceedings of the 12th international conference on Computers helping people with special needs: Part I
  • Year:
  • 2010

Abstract

Current Automatic Speech Recognition (ASR) systems designed to recognize dysarthric speech require training that demands considerable effort and must be repeated whenever the user's speech patterns change. We present CanSpeak, a customizable speech recognition interface that does not require automatic training and instead uses a list of keywords customized for each user. We conducted a preliminary user study with four subjects with dysarthric speech. Customizing the keyword lists doubled the recognition accuracy for the two subjects whose parents and caregivers participated in the customization task. For the other two subjects, only small improvements were observed.
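The abstract describes the core idea only in outline. As a rough illustration of a per-user keyword-list approach (not the authors' implementation), the sketch below matches an imperfect transcription against a small, caregiver-editable vocabulary using fuzzy string matching; the keyword list, the transcription examples, and the matching cutoff are all assumptions made for this example.

    import difflib

    # Hypothetical per-user keyword list, editable by parents or caregivers.
    # (Illustrative only; not taken from the paper.)
    USER_KEYWORDS = ["water", "music", "light", "help", "yes", "no"]

    def match_keyword(transcription, keywords=USER_KEYWORDS, cutoff=0.6):
        """Map a possibly imperfect transcription to the closest keyword.

        Fuzzy matching over a small customized vocabulary stands in for
        per-user acoustic training: the list is edited, not retrained.
        """
        candidates = difflib.get_close_matches(
            transcription.lower().strip(), keywords, n=1, cutoff=cutoff
        )
        return candidates[0] if candidates else None

    if __name__ == "__main__":
        # Simulated recognizer outputs for dysarthric utterances.
        for heard in ["watuh", "wadder", "mooosic", "elp"]:
            print(heard, "->", match_keyword(heard))

Lowering the cutoff makes the matcher more forgiving of atypical pronunciations at the cost of more false positives; in a deployed system that trade-off would be tuned per user, much as the keyword list itself is customized.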