Interactive analysis and synthesis of facial expressions based on personal facial expression space

  • Authors:
  • Naiwala P. Chandrasiri; Takeshi Naemura; Hiroshi Harashima

  • Affiliations:
  • The University of Tokyo, School of Information Science & Technology, Tokyo (all authors)

  • Venue:
  • FGR '04: Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition
  • Year:
  • 2004


Abstract

In this paper, novel methods for interactive facial expression analysis and synthesis based on a Personal Facial Expression Space (PFES) are presented. We propose the PFES to recognize person-specific, primary facial expression image sequences, taking both temporal and spatial characteristics into consideration. On the PFES, facial expression parameters compatible with the MPEG-4 high-level Facial Expression Animation Parameters can be extracted from a user's face image and then processed to synthesize a face image, both in real time. Users observe the synthesized images as they are generated, and this feedback drives the interaction. Experimental results demonstrate the effectiveness of the proposed method. We have also developed user interfaces for the analysis and synthesis processes.
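The abstract gives no implementation details, but the analysis-synthesis loop it describes can be illustrated with a minimal sketch. The sketch below assumes the PFES behaves like a low-dimensional linear (PCA-style) subspace learned from one person's expression image sequences; the class name, the API, and the use of plain PCA are illustrative assumptions for exposition only, not the authors' actual method or the MPEG-4 parameter mapping.

```python
# Hypothetical sketch of a "personal expression space": a PCA-style subspace
# fit to one person's expression image sequences, used to (a) extract
# low-dimensional expression parameters from a new frame (analysis) and
# (b) reconstruct a face image from those parameters (synthesis).
# All names are illustrative, not the paper's code.
import numpy as np


class PersonalExpressionSpace:
    def __init__(self, n_components: int = 3):
        self.n_components = n_components

    def fit(self, frames: np.ndarray) -> "PersonalExpressionSpace":
        """frames: (num_frames, height*width) flattened grayscale images
        covering the person's primary expressions (e.g., neutral -> smile)."""
        self.mean_ = frames.mean(axis=0)
        centered = frames - self.mean_
        # SVD of the centered data yields axes spanning this person's
        # expression variation; keep the leading components as the space.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        self.basis_ = vt[: self.n_components]            # (k, height*width)
        return self

    def analyze(self, frame: np.ndarray) -> np.ndarray:
        """Project a new face frame onto the space -> expression parameters."""
        return self.basis_ @ (frame - self.mean_)        # (k,)

    def synthesize(self, params: np.ndarray) -> np.ndarray:
        """Reconstruct a face image from expression parameters."""
        return self.mean_ + params @ self.basis_         # (height*width,)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training = rng.random((50, 64 * 64))                 # stand-in for captured sequences
    pfes = PersonalExpressionSpace(n_components=3).fit(training)
    params = pfes.analyze(training[10])                  # analysis: image -> parameters
    image = pfes.synthesize(params)                      # synthesis: parameters -> image
    print(params.shape, image.shape)
```

In an interactive setting of the kind the abstract describes, the `analyze` step would run on each incoming camera frame and the resulting parameters would be displayed or edited before `synthesize` renders the feedback image; how the paper maps such parameters to MPEG-4 high-level Facial Expression Animation Parameters is not covered by this sketch.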