A machine learning approach to tongue motion analysis in 2D ultrasound image sequences

  • Authors:
  • Lisa Tang, Ghassan Hamarneh, Tim Bressmann

  • Affiliations:
  • Lisa Tang, Ghassan Hamarneh: Medical Image Analysis Lab., School of Computing Science, Simon Fraser University
  • Tim Bressmann: Department of Speech-Language Pathology, Faculty of Medicine, University of Toronto

  • Venue:
  • MLMI'11: Proceedings of the Second International Conference on Machine Learning in Medical Imaging
  • Year:
  • 2011


Abstract

Analysis of tongue motion as captured in dynamic ultrasound (US) images has been an important tool in speech research. Previous studies generally required semi-automatic tongue segmentation before data analysis could be performed. In this paper, we adopt a machine learning approach that does not require tongue segmentation. Specifically, we employ normalization procedures that temporally register the US sequences using their corresponding audio files. To explicitly encode motion, we then register the image frames spatio-temporally to compute a set of deformation fields, from which we construct velocity-based and spatio-temporal gestural descriptors, where the latter explicitly encode tongue dynamics during speech. Next, using the recently proposed Histogram Intersection Kernel, we perform support vector machine classification to evaluate the extracted descriptors against a set of clinical measures. We applied our method to the prediction of speech abnormalities and tongue gestures. Differentiating tongue motion produced by patients with and without speech impediments, on a dataset of 24 US sequences, was achieved with a classification accuracy of 94%. When applied to another dataset of 90 US sequences for two other classification tasks, accuracies were 86% and 84%.
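The classification step described above could be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each US sequence has already been reduced to a histogram-valued descriptor, and the array sizes, random data, and labels are purely hypothetical. It uses scikit-learn's precomputed-kernel SVM with the Histogram Intersection Kernel named in the abstract:

```python
import numpy as np
from sklearn.svm import SVC

def histogram_intersection_kernel(A, B):
    """Histogram Intersection Kernel: K[i, j] = sum_k min(A[i, k], B[j, k])."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

# Hypothetical stand-ins for the gestural descriptors: 24 sequences,
# each summarized by a 16-bin histogram (sizes are illustrative only).
rng = np.random.default_rng(0)
X_train = rng.random((24, 16))
y_train = rng.integers(0, 2, size=24)  # e.g. impaired vs. unimpaired

# Train an SVM on the precomputed Gram matrix of the training descriptors.
K_train = histogram_intersection_kernel(X_train, X_train)
clf = SVC(kernel="precomputed").fit(K_train, y_train)

# To classify new sequences, compute the kernel between their descriptors
# and the training descriptors, then predict.
X_test = rng.random((5, 16))
K_test = histogram_intersection_kernel(X_test, X_train)
pred = clf.predict(K_test)
```

In practice the descriptors would be built from the deformation fields produced by spatio-temporal registration, and accuracy would be estimated with cross-validation rather than on training data.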