Facial expression recognition for human-robot interaction: a prototype

  • Authors:
  • Matthias Wimmer; Bruce A. MacDonald; Dinuka Jayamuni; Arpit Yadav

  • Affiliations:
  • Department of Informatics, Technische Universität München, Germany; Electrical and Computer Engineering, University of Auckland, New Zealand; Electrical and Computer Engineering, University of Auckland, New Zealand; Electrical and Computer Engineering, University of Auckland, New Zealand

  • Venue:
  • RobVis'08 Proceedings of the 2nd international conference on Robot vision
  • Year:
  • 2008

Abstract

To be effective in the human world, robots must respond to human emotional states. This paper focuses on the recognition of the six universal human facial expressions. In the last decade there has been successful research on facial expression recognition (FER) in controlled conditions suitable for human-computer interaction [1,2,3,4,5,6,7,8]. However, the human-robot scenario presents additional challenges, including a lack of control over lighting conditions and over the relative poses and separation of the robot and human, the inherent mobility of robots, and stricter real-time computational requirements dictated by the need for robots to respond in a timely fashion. Our approach imposes lower computational requirements by specifically adapting model-based techniques to the FER scenario. It comprises adaptive skin color extraction, localization of the entire face and facial components, and specifically learned objective functions for fitting a deformable face model. Experimental evaluation reports a recognition rate of 70% on the Cohn-Kanade facial expression database and 67% in a robot scenario, which compare well with other FER systems.
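The adaptive skin color extraction step mentioned in the abstract can be illustrated with a minimal sketch: estimate a skin-color model from a seed region assumed to contain the face, then threshold the rest of the image against it. This is not the authors' implementation; the function names, the normalized-rg color space, and the sigma-based threshold are illustrative assumptions.

```python
import numpy as np

def normalized_rg(image):
    """Convert an RGB image (H, W, 3) to normalized r, g chromaticity,
    which discounts overall brightness and helps with varying lighting."""
    rgb = image.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-6  # avoid division by zero on black pixels
    return rgb[..., 0] / total, rgb[..., 1] / total

def adaptive_skin_mask(image, seed_box, k=2.5):
    """Build a skin-color model from a seed face region (hypothetical API),
    then mark every pixel within k standard deviations as skin.

    seed_box: (top, bottom, left, right) of a region assumed to be skin.
    """
    r, g = normalized_rg(image)
    t, b, l, rt = seed_box
    r_seed, g_seed = r[t:b, l:rt], g[t:b, l:rt]
    mu = np.array([r_seed.mean(), g_seed.mean()])
    sigma = np.array([r_seed.std() + 1e-6, g_seed.std() + 1e-6])
    # A pixel counts as skin if both chromaticities lie within k sigma
    # of the seed model; re-estimating mu/sigma per frame is what makes
    # the extraction "adaptive" to the current lighting.
    dist = np.maximum(np.abs(r - mu[0]) / sigma[0],
                      np.abs(g - mu[1]) / sigma[1])
    return dist <= k
```

In a robot setting, the seed region would come from the face localization stage, so the color model tracks the current person and illumination rather than relying on fixed thresholds.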