Registration of multimodal data for estimating the parameters of an articulatory model

  • Authors:
  • M. Aron, A. Toutios, M.-O. Berger, E. Kerrien, B. Wrobel-Dautcourt, Y. Laprie

  • Affiliations:
  • LORIA / CNRS / INRIA Nancy Grand-Est, BP 101, 54602 Villers-lès-Nancy, France (all authors)

  • Venue:
  • ICASSP '09 Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing
  • Year:
  • 2009


Abstract

Being able to animate a speech production model with articulatory data would open applications in many domains. In this paper, we first consider the problem of acquiring articulatory data from non-invasive image and sensor modalities: dynamic ultrasound (US) images, stereovision 3D data, electromagnetic sensors, and MRI. We focus in particular on automatic registration methods that enable the fusion of the articulatory features in a common frame. We then derive articulatory parameters by fitting these features with Maeda's model. To our knowledge, this is the first attempt to derive articulatory parameters from features automatically extracted and registered across modalities. The results demonstrate the soundness of the approach and the reliability of the fused articulatory data.
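
The abstract names two core operations without implementation detail: registering features from the different modalities into a common frame, and fitting the fused features with a linear articulatory model. The sketch below is a minimal illustration of both, not the authors' actual pipeline: it assumes point correspondences between modalities are already available, and that the articulatory model is a linear component model (contour ≈ mean + basis · parameters), as in Maeda-style models. The function names, array shapes, and the choice of a Kabsch-style rigid alignment are assumptions made for the example.

```python
import numpy as np

def rigid_registration(src, dst):
    """Kabsch-style rigid alignment of corresponding 3D points
    (one common way to bring, e.g., electromagnetic-sensor coordinates
    into the stereovision frame; hypothetical helper, not the paper's method).

    src, dst: (N, 3) arrays of corresponding points.
    Returns R (3x3) and t (3,) such that dst ~ src @ R.T + t.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def fit_linear_articulatory_model(observed, mean_contour, basis):
    """Least-squares fit of articulatory parameters, assuming a linear
    component model: observed ~ mean_contour + basis @ p.

    observed, mean_contour: (2N,) flattened contour coordinates.
    basis: (2N, K) matrix of linear components (K articulatory parameters).
    Returns the K-dimensional parameter vector p.
    """
    p, *_ = np.linalg.lstsq(basis, observed - mean_contour, rcond=None)
    return p
```

In this reading, registration supplies the contour and sensor features in one coordinate frame, and the parameter fit reduces them to a small number of articulatory parameters per time frame; how the actual features are extracted and weighted is described in the paper itself.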