Some experiments in audio-visual speech processing

  • Authors:
  • G. Chollet; R. Landais; T. Hueber; H. Bredin; C. Mokbel; P. Perrot; L. Zouari

  • Affiliations:
  • G. Chollet — CNRS LTCI/TSI Paris, Paris Cedex 13, France
  • R. Landais — CNRS LTCI/TSI Paris, Paris Cedex 13, France
  • T. Hueber — CNRS LTCI/TSI Paris, Paris Cedex 13, France, and Laboratoire d'Electronique, ESPCI, Paris, France
  • H. Bredin — CNRS LTCI/TSI Paris, Paris Cedex 13, France
  • C. Mokbel — University of Balamand, Tripoli, Lebanon
  • P. Perrot — CNRS LTCI/TSI Paris, Paris Cedex 13, France, and Institut de Recherche Criminelle de la Gendarmerie Nationale, Rosny-sous-Bois, France
  • L. Zouari — CNRS LTCI/TSI Paris, Paris Cedex 13, France

  • Venue:
  • NOLISP'07 Proceedings of the 2007 international conference on Advances in nonlinear speech processing
  • Year:
  • 2007

Abstract

Natural speech is produced by the vocal organs of a particular talker. The acoustic features of the speech signal are therefore correlated with the movements of the articulators (lips, jaw, tongue, velum, ...). For instance, hearing-impaired people (and not only them) improve their understanding of speech by lip reading. This chapter is an overview of audiovisual speech processing, with emphasis on experiments in recognition, speaker verification, indexing, and corpus-based synthesis from tongue and lip movements.