Learning a dynamic classification method to detect faces and identify facial expression

  • Authors:
  • Ramana Isukapalli, Ahmed Elgammal, Russell Greiner

  • Affiliations:
  • Bell Labs Innovations, Lucent Technologies, Whippany, NJ; Rutgers University, New Brunswick, NJ; University of Alberta, Edmonton, Canada

  • Venue:
  • AMFG'05: Proceedings of the Second International Conference on Analysis and Modelling of Faces and Gestures
  • Year:
  • 2005


Abstract

While there has been a great deal of research on face detection and recognition, very limited work has addressed identifying the expression on a face. Many current face detection systems use a [Viola/Jones]-style “cascade” of AdaBoost-based classifiers to interpret (sub)images, e.g., to identify which regions contain faces. We extend this method by learning a decision tree of such classifiers (DTC): while standard cascade classification methods apply the same fixed sequence of classifiers to each image, our DTC selects the most effective classifier at every stage, based on the outcomes of the classifiers already applied. We use the DTC not only to detect faces in a test image, but also to identify the expression on each face.
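The abstract contrasts a fixed cascade with a decision tree whose next classifier depends on earlier outcomes. The following is a minimal sketch of that distinction, not the authors' implementation: the `Classifier`, `Window`, and `DTCNode` types, the score threshold, and the leaf labels are all illustrative assumptions.

```python
# Hedged sketch: fixed cascade vs. decision-tree classifier selection (DTC).
# All names and thresholds here are hypothetical, not the paper's actual code.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Window:
    """A detection window (sub-image); pixel contents omitted for brevity."""
    pixels: object


@dataclass
class Classifier:
    """An AdaBoost-style stage classifier returning a confidence in [0, 1]."""
    name: str
    score: Callable[[Window], float]


def fixed_cascade(window: Window, stages: List[Classifier],
                  threshold: float = 0.5) -> bool:
    """Standard cascade: every window sees the same fixed sequence of
    classifiers and is rejected at the first stage it fails."""
    for clf in stages:
        if clf.score(window) < threshold:
            return False
    return True


@dataclass
class DTCNode:
    """A node in a decision tree of classifiers. Internal nodes hold a
    classifier and two children; leaves hold a label (assumed complete)."""
    classifier: Optional[Classifier] = None
    threshold: float = 0.5
    on_accept: Optional["DTCNode"] = None
    on_reject: Optional["DTCNode"] = None
    label: Optional[str] = None  # e.g. "not a face", "smiling", "neutral"


def dtc_classify(window: Window, node: DTCNode) -> str:
    """Decision-tree selection: which classifier is applied next depends on
    the outcomes of the classifiers already applied, so different windows
    can follow different classifier sequences."""
    while node.label is None:
        accepted = node.classifier.score(window) >= node.threshold
        node = node.on_accept if accepted else node.on_reject
    return node.label
```

In this sketch the fixed cascade applies the same stages to every window, whereas `dtc_classify` routes each window down a different branch of pre-trained classifiers, which is how the tree can both reject non-faces and assign an expression label at its leaves.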