Unsupervised temporal segmentation of talking faces using visual cues to improve emotion recognition

  • Authors:
  • Sudha Velusamy;Viswanath Gopalakrishnan;Bilva Navathe;Hariprasad Kannan;Balasubramanian Anand;Anshul Sharma

  • Affiliations:
  • SAIT India, Samsung India Software Operations, Bangalore, India (all authors)

  • Venue:
  • ACII'11: Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction, Part I
  • Year:
  • 2011

Abstract

The mouth region of the human face carries highly discriminative information about facial expressions. Facial expression analysis for inferring a user's emotional state becomes very challenging when the user talks, because many of the mouth actions produced while uttering words resemble the mouth shapes that express various emotions. We introduce a novel unsupervised method that temporally segments talking faces from faces displaying only emotions, and uses knowledge of the talking-face segments to improve emotion recognition. The proposed method represents mouth features with an integrated gradient histogram of local binary patterns and identifies temporal segments of talking faces online by estimating the uncertainty of mouth movements over a period of time. The algorithm accurately identifies talking-face segments on a real-world database in which talking and emotional expression occur naturally. Moreover, the emotion recognition system that uses talking-face cues shows considerable improvement in recognition accuracy.
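
To make the abstract's pipeline concrete, the sketch below illustrates one plausible reading of it in Python: per-frame uniform-LBP histograms of a mouth crop are accumulated into a gradient histogram over a sliding window, and the entropy of that histogram serves as the uncertainty measure for online talk/emotion labelling. The paper does not publish reference code, so the LBP parameters, window length, entropy threshold, and all function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Hypothetical parameters; the paper's exact choices are not given.
LBP_POINTS, LBP_RADIUS = 8, 1
N_BINS = LBP_POINTS + 2   # "uniform" LBP yields P + 2 code labels
WINDOW = 15               # frames over which uncertainty is estimated
THRESHOLD = 2.0           # entropy threshold separating talking from emotion

def mouth_lbp_histogram(mouth_gray):
    """Normalized histogram of uniform LBP codes for one mouth crop."""
    codes = local_binary_pattern(mouth_gray, LBP_POINTS, LBP_RADIUS,
                                 method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS),
                           density=True)
    return hist

def temporal_uncertainty(histograms):
    """Entropy of the gradient histogram integrated over a window.

    Talking produces rapid, erratic mouth motion, so frame-to-frame
    histogram changes spread across many bins and entropy rises; a held
    emotional expression keeps the changes concentrated and entropy low.
    """
    grads = np.abs(np.diff(histograms, axis=0)).sum(axis=0)
    p = grads / (grads.sum() + 1e-12)
    return -(p * np.log(p + 1e-12)).sum()

def label_talking_segments(mouth_frames):
    """Online labelling: True where the recent window looks like talking."""
    hists, labels = [], []
    for frame in mouth_frames:
        hists.append(mouth_lbp_histogram(frame))
        if len(hists) < WINDOW:
            labels.append(False)  # warm-up: not enough history yet
            continue
        window = np.stack(hists[-WINDOW:])
        labels.append(temporal_uncertainty(window) > THRESHOLD)
    return labels
```

In such a scheme, frames labelled as talking would be excluded from (or down-weighted in) the downstream emotion classifier, which is the mechanism by which talking-face cues could improve recognition accuracy.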