Statistical lip-appearance models trained automatically using audio information

  • Authors:
  • Philippe Daubias; Paul Deléglise

  • Affiliations:
  • Philippe Daubias: Laboratoire d'Informatique de l'Université du Maine (LIUM), Institut d'Informatique Claude Chappe, Le Mans Cedex, France, and Laboratoire d'Informatique Graphique Image et Modélisation (L ...
  • Paul Deléglise: Laboratoire d'Informatique de l'Université du Maine (LIUM), Institut d'Informatique Claude Chappe, Le Mans Cedex, France

  • Venue:
  • EURASIP Journal on Applied Signal Processing
  • Year:
  • 2002

Abstract

We aim to model the appearance of the lower face region in order to assist visual feature extraction for audio-visual speech processing applications. In this paper, we present a neural-network-based statistical appearance model of the lips which classifies pixels as belonging to the lips, skin, or inner mouth. Training this model requires labeled examples, and we propose to label images automatically by employing a lip-shape model together with a red-hue energy function. To improve lip-tracking performance, we use image sequences in which the lips are marked up in blue, recorded while the same subject utters the same sentences as in the natural, non-marked-up sequences. Lip shapes, which are easily extracted from the blue marked-up images, are then mapped onto the natural sequences using acoustic information. The resulting lip-shape estimates simplify lip-tracking on the natural images: they reduce the dimensionality of the parameter space in the red-hue energy minimization, yielding better estimates of contour shape and location. We applied the proposed method to a small audio-visual database of three subjects and obtained pixel-classification errors of around 6%, compared with 3% for hand-placed contours and 20% for filtered red-hue.
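The abstract describes a neural-network model that classifies each lower-face pixel as lips, skin, or inner mouth, trained on automatically labeled examples. It does not specify the network architecture or the pixel features used, so the following is a minimal illustrative sketch under assumptions, not the authors' implementation: a small multilayer perceptron trained by gradient descent on illumination-normalized chroma values, with synthetic pixels standing in for the automatically labeled training data.

```python
import numpy as np

# Illustrative sketch only: a tiny MLP that labels individual pixels as
# lips, skin, or inner mouth from normalized red/green chroma.  The feature
# choice, architecture, and training data below are assumptions; the real
# labels in the paper come from a lip-shape model and a red-hue energy
# function, not from synthetic data.

RNG = np.random.default_rng(0)
CLASSES = ("lips", "skin", "inner_mouth")


def chroma_features(rgb):
    """Map raw RGB values (0-255) to illumination-normalized chroma (r, g)."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True) + 1e-6
    norm = rgb / total
    return norm[..., :2]  # normalized red and green; blue is redundant


def init_mlp(n_in=2, n_hidden=8, n_out=len(CLASSES)):
    return {
        "W1": RNG.normal(0, 0.5, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": RNG.normal(0, 0.5, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }


def forward(params, x):
    """Return per-pixel class probabilities (softmax) and hidden activations."""
    h = np.tanh(x @ params["W1"] + params["b1"])
    logits = h @ params["W2"] + params["b2"]
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True), h


def train_step(params, x, y_onehot, lr=0.1):
    """One gradient-descent step of cross-entropy training on labeled pixels."""
    p, h = forward(params, x)
    d_logits = (p - y_onehot) / len(x)                 # softmax + cross-entropy gradient
    d_h = (d_logits @ params["W2"].T) * (1 - h ** 2)   # backprop through tanh
    params["W2"] -= lr * h.T @ d_logits
    params["b2"] -= lr * d_logits.sum(axis=0)
    params["W1"] -= lr * x.T @ d_h
    params["b1"] -= lr * d_h.sum(axis=0)
    return params


# Toy usage: synthetic pixel colors standing in for automatically labeled examples.
lips = RNG.normal([180, 70, 90], 15, (200, 3))    # reddish lip pixels
skin = RNG.normal([200, 150, 130], 15, (200, 3))  # flesh-tone pixels
mouth = RNG.normal([60, 20, 30], 10, (200, 3))    # dark inner-mouth pixels
x = chroma_features(np.vstack([lips, skin, mouth]))
y = np.repeat(np.eye(3), 200, axis=0)

params = init_mlp()
for _ in range(500):
    params = train_step(params, x, y)

probs, _ = forward(params, x)
print("training accuracy:", (probs.argmax(1) == y.argmax(1)).mean())
```

In the paper's setting, the labeled pixels fed to such a classifier would come from the automatic labeling stage (lip-shape model plus red-hue energy), and the trained model would then score pixels in new frames to support contour tracking.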