Recognizing Facial Expressions in Image Sequences Using Local Parameterized Models of Image Motion

  • Authors:
  • Michael J. Black; Yaser Yacoob

  • Affiliations:
  • Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304. E-mail: black@parc.xerox.com; Computer Vision Laboratory, University of Maryland, College Park, MD 20742. E-mail: yaser@cs.umd.edu

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 1997

Abstract

This paper explores the use of local parameterized models of image motion for recovering and recognizing the non-rigid and articulated motion of human faces. Parametric flow models (for example, affine) are popular for estimating motion in rigid scenes. We observe that within local regions in space and time, such models not only accurately model non-rigid facial motions but also provide a concise description of the motion in terms of a small number of parameters. These parameters are intuitively related to the motion of facial features during facial expressions, and we show how expressions such as anger, happiness, surprise, fear, disgust, and sadness can be recognized from the local parametric motions in the presence of significant head motion. The motion tracking and expression recognition approach performed with high accuracy in extensive laboratory experiments involving 40 subjects as well as in television and movie sequences.
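
As a rough illustration of the parametric flow models mentioned in the abstract, the Python sketch below shows a standard six-parameter affine motion model and a plain least-squares fit of its parameters to flow vectors sampled within a local region. The function names, parameter ordering, and the least-squares fitting step are assumptions made for this sketch; they are not the authors' implementation, which is described in the paper itself.

```python
import numpy as np

def affine_flow(params, x, y):
    """Evaluate a six-parameter affine motion model at points (x, y).

    u(x, y) = a0 + a1*x + a2*y
    v(x, y) = a3 + a4*x + a5*y
    The ordering a0..a5 is a convention chosen for this sketch.
    """
    a0, a1, a2, a3, a4, a5 = params
    u = a0 + a1 * x + a2 * y
    v = a3 + a4 * x + a5 * y
    return u, v

def fit_affine(x, y, u, v):
    """Least-squares fit of the six affine parameters to flow samples
    (u, v) observed at image coordinates (x, y) inside a local region."""
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    # Stack the u-equations and v-equations into one linear system A p = b.
    A_u = np.stack([ones, x, y, zeros, zeros, zeros], axis=1)
    A_v = np.stack([zeros, zeros, zeros, ones, x, y], axis=1)
    A = np.concatenate([A_u, A_v], axis=0)
    b = np.concatenate([u, v])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

if __name__ == "__main__":
    # Synthesize flow from known parameters over a small region,
    # then recover them to check the fit.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 200)
    y = rng.uniform(-1.0, 1.0, 200)
    true_params = np.array([0.5, 0.1, -0.05, -0.2, 0.02, 0.08])
    u, v = affine_flow(true_params, x, y)
    print(np.round(fit_affine(x, y, u, v), 3))  # should match true_params
```

Combinations of the linear terms of an affine model correspond to the divergence, curl, and deformation of the region, which is one standard way such parameters acquire the kind of intuitive interpretation the abstract refers to.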