IEEE Transactions on Information Technology in Biomedicine
ECCV'10 Proceedings of the 11th European conference on Computer vision: Part II
Analysis of tongue motion as captured in dynamic ultrasound (US) images has been an important tool in speech research. Previous studies have generally required semi-automatic tongue segmentation before data analysis could proceed. In this paper, we adopt a machine learning approach that does not require tongue segmentation. Specifically, we employ advanced normalization procedures to temporally register the US sequences using their corresponding audio recordings. To encode motion, we then register the image frames spatio-temporally and compute a set of deformation fields, from which we construct velocity-based and spatio-temporal gestural descriptors; the latter explicitly encode tongue dynamics during speech. Next, using the recently proposed Histogram Intersection Kernel, we perform support vector machine classification to evaluate the extracted descriptors against a set of clinical measures. We applied our method to the prediction of speech abnormalities and tongue gestures. Differentiating tongue motion produced by patients with and without speech impediments on a dataset of 24 US sequences was achieved with a classification accuracy of 94%. On another dataset of 90 US sequences, accuracies for two further classification tasks were 86% and 84%.
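The classification step pairs the Histogram Intersection Kernel with a support vector machine. A minimal sketch of that pairing, using scikit-learn's precomputed-kernel interface with synthetic histogram descriptors standing in for the actual gestural features (the array sizes and labels below are illustrative, not from the paper's data), might look like:

```python
import numpy as np
from sklearn.svm import SVC

def histogram_intersection_kernel(X, Y):
    """K[i, j] = sum_k min(X[i, k], Y[j, k]) for histogram rows of X and Y."""
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

rng = np.random.default_rng(0)
# Synthetic stand-ins: 24 sequences, each summarized by a 64-bin histogram.
X = rng.random((24, 64))
y = rng.integers(0, 2, size=24)  # hypothetical impaired / unimpaired labels

# Train an SVM on the precomputed histogram-intersection Gram matrix.
K_train = histogram_intersection_kernel(X, X)
clf = SVC(kernel="precomputed").fit(K_train, y)

# At test time, the kernel is evaluated between test and training descriptors.
preds = clf.predict(histogram_intersection_kernel(X, X))
```

The precomputed-kernel route is convenient here because the histogram intersection is not among scikit-learn's built-in kernels; any kernel function returning a valid Gram matrix can be plugged in the same way.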