Applying discretized articulatory knowledge to dysarthric speech

  • Authors: Frank Rudzicz

  • Affiliation: University of Toronto, Department of Computer Science, Ontario, Canada M5S 3G4

  • Venue: ICASSP '09: Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing
  • Year: 2009

Abstract

This paper applies two dynamic Bayes networks, one incorporating theoretical and the other measured kinematic features of the vocal tract, to the task of labeling phoneme sequences in unsegmented dysarthric speech. Speaker-dependent and speaker-adaptive versions of these models are compared against two acoustic-only baselines, namely a hidden Markov model and a latent dynamic conditional random field. Both the theoretical and the kinematic models of the vocal tract perform admirably on speaker-dependent speech, and we show that the statistics of the latter are not necessarily transferable between speakers during adaptation.
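
As a rough illustration of the general idea only (not the paper's dynamic Bayes networks or its LD-CRF baseline), the NumPy sketch below runs Viterbi decoding over a factored state space in which each hidden state pairs a phoneme with a discretized articulatory configuration. Every dimension, parameter, and variable name here is a hypothetical placeholder invented for the example.

```python
# Hypothetical sketch: phoneme labeling with Viterbi over a factored
# (phoneme, discretized-articulator) state space. All model parameters
# below are random placeholders, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_phones = 4          # toy phoneme inventory size (placeholder)
n_artic = 3           # toy number of discretized articulatory positions
n_states = n_phones * n_artic  # joint state s = phoneme * n_artic + artic
n_feats = 6           # toy acoustic feature dimension
T = 20                # frames in one toy utterance

# Placeholder transition and emission parameters.
log_trans = np.log(rng.dirichlet(np.ones(n_states), size=n_states))
means = rng.normal(size=(n_states, n_feats))

def log_emission(frame):
    # Spherical-Gaussian log-likelihood of one acoustic frame per state.
    diff = frame - means
    return -0.5 * np.sum(diff * diff, axis=1)

def viterbi(frames):
    # Standard Viterbi recursion over the joint state space.
    delta = log_emission(frames[0]) + np.log(1.0 / n_states)
    back = np.zeros((len(frames), n_states), dtype=int)
    for t in range(1, len(frames)):
        scores = delta[:, None] + log_trans            # (prev, next)
        back[t] = np.argmax(scores, axis=0)            # best prev per next
        delta = scores[back[t], np.arange(n_states)] + log_emission(frames[t])
    path = [int(np.argmax(delta))]
    for t in range(len(frames) - 1, 0, -1):
        path.append(back[t, path[-1]])
    path.reverse()
    # Project each joint state back onto its phoneme label.
    return [s // n_artic for s in path]

frames = rng.normal(size=(T, n_feats))
print(viterbi(frames))  # frame-level phoneme indices for the toy utterance
```

Factoring the state this way is just one simple means of letting articulatory knowledge constrain phoneme decoding; the models in the paper instead express the coupling between acoustics and vocal-tract variables probabilistically within a dynamic Bayes network.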