M-Face: An Appearance-Based Photorealistic Model for Multiple Facial Attributes Rendering

  • Authors:
  • Yun Fu; Nanning Zheng

  • Affiliations:
  • Inst. for Adv. Sci. & Technol., Univ. of Illinois, Urbana, IL; -

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 2006

Abstract

A novel framework for appearance-based photorealistic facial modeling, called Merging Face (M-Face), is presented and applied to generate emotional facial attributes in rotated views. Assuming that human faces belong to both the linear object class and the Lambertian object class, we span the face space and the attribute space, respectively, using groups of prototypes and the merging ratio image (MRI). The MRI is defined as the seamless blend of individual expression, aging, and illumination (quotient image) ratio images. M-Face integrates view-space projection, shape caricaturing, and texture MRI-mapping. Derived from the average face, the caricatured shape is made more distinctive by exaggerating individual characteristics, while the re-rendered texture multiplies in the MRI information during caricaturing. Based on the M-Face model, expression morphing, chronological aging or rejuvenation, and illumination variation can be merged seamlessly, in a photorealistic style, onto view-rotated faces produced by view morphing. The framework has three advantages. First, 3-D reconstruction is avoided without weakening the photorealistic effect. Second, experiments show that integrating shape caricaturing with texture MRI-mapping is an efficient and computationally inexpensive strategy for realistic face synthesis. Third, M-Face is a 2-D parameter-driven model, which greatly simplifies user manipulation. Potential applications of M-Face include virtual human faces, speech-driven talking heads, digital painting, film making, and low-bit-rate communication.
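The core texture idea can be illustrated in a few lines: a ratio image records, per pixel, how an attribute prototype (expression, aging, or illumination) rescales a neutral prototype, several ratio images are blended into an MRI, and the MRI is multiplied into a target texture. The sketch below, using NumPy, assumes grayscale textures in [0, 1] and uses a weighted geometric mean as one plausible "seamless blend"; the paper's exact blending scheme and prototype data are not specified here, so all names and values are illustrative.

```python
import numpy as np

def ratio_image(attribute_tex, neutral_tex, eps=1e-6):
    """Per-pixel ratio image: how an attribute prototype rescales
    the neutral prototype texture (eps guards against division by zero)."""
    return attribute_tex / (neutral_tex + eps)

def merge_ratio_images(ratios, weights):
    """Blend individual ratio images into a merging ratio image (MRI).
    A weighted geometric mean is used here as one plausible blend;
    the paper's actual blending scheme may differ."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    log_mri = sum(wi * np.log(r) for wi, r in zip(w, ratios))
    return np.exp(log_mri)

def apply_mri(target_tex, mri):
    """Re-render a target texture by multiplying in the MRI."""
    return np.clip(target_tex * mri, 0.0, 1.0)

# Toy 2x2 textures standing in for aligned face prototypes (hypothetical).
neutral = np.full((2, 2), 0.5)
smile   = np.full((2, 2), 0.6)   # expression prototype (illustrative)
aged    = np.full((2, 2), 0.4)   # aging prototype (illustrative)

r_expr = ratio_image(smile, neutral)
r_age  = ratio_image(aged, neutral)
mri = merge_ratio_images([r_expr, r_age], weights=[0.5, 0.5])
out = apply_mri(neutral, mri)
```

Because the MRI is a pure per-pixel multiplier, this re-rendering step is cheap and 2-D throughout, which is consistent with the abstract's claim that 3-D reconstruction is avoided.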