An interactive facial expression generation system

  • Authors:
  • Chuan-Kai Yang and Wei-Ting Chiang

  • Affiliation:
  • Department of Information Management, National Taiwan University of Science and Technology, Taipei 106, Taiwan, ROC (both authors)

  • Venue:
  • Multimedia Tools and Applications
  • Year:
  • 2008

Abstract

How to generate vivid facial expressions by computer has long been an interesting and challenging problem. Some research adopts an anatomical approach, studying the relationships between expressions and the underlying bones and muscles. Alternatively, MPEG-4's SNHC (synthetic/natural hybrid coding) provides mechanisms that allow detailed descriptions of facial expressions and animations. Unlike most existing approaches, which require a user to provide 3D head models, a set of reference images, detailed facial feature markers, numerous associated parameters, and/or even non-trivial user assistance, our proposed approach is simple, intuitive, and interactive, and, most importantly, it is still capable of generating vivid 2D facial expressions. With our system, a user is only required to supply a single photo and spend a couple of seconds roughly marking the positions of the eyes, eyebrows, and mouth in the photo; our system then traces the contours of these facial features more accurately using the active contour technique. Different expressions can subsequently be generated and morphed via the mesh warping algorithm. Another contribution of this paper is a simple music emotion analysis algorithm, which is coupled with our system to further demonstrate the effectiveness of our facial expression generation. Through this integration, our system can identify the emotions of a music piece and display the corresponding emotions via the aforementioned synthesized facial expressions. Experimental results show that, in general, the end-to-end generation time, from the moment an input photo is given to the moment the final facial expressions are produced, is about one minute.
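The abstract does not spell out the paper's active contour formulation, but the general idea can be sketched. The following is a minimal greedy snake iteration, one common active-contour variant: each contour point moves to the neighboring pixel that minimizes a weighted sum of an internal smoothness energy (distance to its neighbors) and an external image energy (attraction to strong edges). All names, the energy terms, and the weight `alpha` are illustrative assumptions, not the authors' implementation.

```python
def snake_step(contour, edge_strength, alpha=0.5):
    """One greedy iteration of a closed snake (illustrative sketch).

    contour:       list of (x, y) integer points forming a closed contour
    edge_strength: dict mapping (x, y) -> edge magnitude (higher = stronger edge)
    alpha:         weight of the internal (smoothness) energy term
    """
    new = []
    n = len(contour)
    for i, (x, y) in enumerate(contour):
        px, py = contour[(i - 1) % n]   # previous neighbor on the contour
        qx, qy = contour[(i + 1) % n]   # next neighbor on the contour
        best, best_e = (x, y), float("inf")
        # examine the 3x3 pixel neighborhood of the current point
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cx, cy = x + dx, y + dy
                # internal energy: squared distances to the two neighbors
                e_int = (cx - px) ** 2 + (cy - py) ** 2 \
                      + (cx - qx) ** 2 + (cy - qy) ** 2
                # external energy: strong edges lower the energy
                e_ext = -edge_strength.get((cx, cy), 0.0)
                e = alpha * e_int + e_ext
                if e < best_e:
                    best_e, best = e, (cx, cy)
        new.append(best)
    return new
```

In practice such a step is repeated until the contour stops moving; with no edge information, the smoothness term alone makes the contour shrink toward its interior.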
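The mesh warping step can likewise be illustrated. In a typical mesh warp, a regular control mesh is laid over the photo, feature motion deforms the destination mesh, and each pixel is mapped through its cell by bilinear interpolation. This sketch maps a single point from an axis-aligned source cell to a deformed destination quad; the function and its corner ordering are assumptions for illustration, not the paper's code.

```python
def warp_point(src_rect, dst_quad, p):
    """Map point p from an axis-aligned source mesh cell to a deformed quad.

    src_rect: ((x0, y0), (x1, y1)) opposite corners of the source cell
    dst_quad: (c00, c10, c01, c11) destination corners, where cUV is the
              corner at parameter (u, v) in {0, 1} x {0, 1}
    """
    (x0, y0), (x1, y1) = src_rect
    u = (p[0] - x0) / (x1 - x0)   # horizontal parameter in [0, 1]
    v = (p[1] - y0) / (y1 - y0)   # vertical parameter in [0, 1]
    c00, c10, c01, c11 = dst_quad
    # bilinear blend of the four destination corners at (u, v)
    x = (1-u)*(1-v)*c00[0] + u*(1-v)*c10[0] + (1-u)*v*c01[0] + u*v*c11[0]
    y = (1-u)*(1-v)*c00[1] + u*(1-v)*c10[1] + (1-u)*v*c01[1] + u*v*c11[1]
    return (x, y)
```

Morphing between two expressions then amounts to interpolating the destination mesh positions over time and warping (and cross-dissolving) the image through the intermediate meshes.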
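The abstract also does not detail the music emotion analysis. A common simplification in this area, offered here purely as an illustrative assumption rather than the paper's algorithm, maps tempo to arousal and major/minor mode to valence, in the spirit of a two-dimensional (valence–arousal) emotion model; the 100 BPM threshold below is hypothetical.

```python
def classify_emotion(tempo_bpm, is_major):
    """Toy emotion classifier (illustrative only, not the paper's method).

    tempo_bpm: estimated tempo of the piece in beats per minute
    is_major:  True if the piece is in a major key, False if minor
    """
    high_arousal = tempo_bpm >= 100   # hypothetical tempo threshold
    positive = is_major               # major mode taken as positive valence
    if high_arousal and positive:
        return "happy"
    if high_arousal and not positive:
        return "angry"
    if not high_arousal and positive:
        return "calm"
    return "sad"
```

Each predicted label would then drive the corresponding synthesized facial expression.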