Recognition rate prediction for dysarthric speech disorder via speech consistency score

  • Authors:
  • Prakasith Kayasith;Thanaruk Theeramunkong;Nuttakorn Thubthong

  • Affiliations:
  • Assistive Technology Center, National Electronics and Computer Technology Center, Klong Luang, Pathumthani, Thailand and School of Information and Computer Technology, Sirindhorn International Institute of Technology, Thammasat University, Klong Luang, Pathumthani, Thailand;School of Information and Computer Technology, Sirindhorn International Institute of Technology, Thammasat University, Klong Luang, Pathumthani, Thailand;Acoustics and Speech Research Laboratory, Department of Physics, Faculty of Science, Chulalongkorn University, Bangkok, Thailand

  • Venue:
  • PRICAI'06: Proceedings of the 9th Pacific Rim International Conference on Artificial Intelligence
  • Year:
  • 2006

Abstract

Dysarthria is a collection of motor speech disorders. The severity of dysarthria is traditionally evaluated by human experts or a group of listeners. This paper proposes a new indicator called the speech consistency score (SCS). By considering the relation of speech similarity and dissimilarity, SCS can be applied to evaluate the severity of a dysarthric speaker's impairment. Aside from being used as a tool for speech assessment, SCS can also be used to predict the likely outcome of speech recognition. A number of experiments were conducted to compare the recognition rates predicted by SCS with the recognition rates of two well-known recognition systems, HMM and ANN. The results show that the root mean square error between the predicted rates and the recognition rates is less than 7.0% (R² = 0.74) and 2.5% (R² = 0.96) for HMM and ANN, respectively. Moreover, to assess the use of SCS in the general case, a test on an unseen recognition set showed an error of 11% (R² = 0.48) for HMM.
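
The abstract evaluates SCS by comparing the recognition rates it predicts against rates measured from HMM and ANN recognizers, reporting RMSE and R². The sketch below (Python with NumPy, using made-up placeholder rates rather than the paper's data) shows one conventional way such a comparison could be computed; it is an illustration of the metrics, not the authors' implementation.

```python
# Minimal sketch: RMSE and R^2 between SCS-predicted and measured recognition
# rates. The per-speaker rate values are hypothetical placeholders.
import numpy as np

def rmse(predicted, actual):
    """Root mean square error between two rate vectors (same units, e.g. %)."""
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

def r_squared(predicted, actual):
    """Coefficient of determination of the prediction against the measurement."""
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical per-speaker recognition rates (%): SCS prediction vs. an HMM system.
scs_predicted = [92.0, 85.5, 71.0, 60.5, 43.0]
hmm_measured  = [90.0, 88.0, 65.0, 58.0, 47.0]

print(f"RMSE = {rmse(scs_predicted, hmm_measured):.2f}%")
print(f"R^2  = {r_squared(scs_predicted, hmm_measured):.2f}")
```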