High-Resolution Face Fusion for Gender Conversion

  • Authors:
  • Jinli Suo, Liang Lin, Shiguang Shan, Xilin Chen, Wen Gao

  • Affiliations:
  • Graduate University of the Chinese Academy of Sciences, Beijing, China

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
  • Year:
  • 2011

Abstract

This paper presents an integrated face image fusion framework for gender conversion that combines a hierarchical compositional paradigm with seamless image-editing techniques. In our framework, a high-resolution face is represented by a probabilistic graphical model that decomposes the face into several parts (facial components) constrained by explicit spatial configurations (relationships). Benefiting from this representation, the proposed fusion strategy largely preserves the identity of each facial component while applying the gender transformation. Given a face image, the basic idea is to select reference facial components from the opposite-gender group as templates and transform the appearance of the given image toward these templates. Our fusion approach decomposes a face image into two kinds of regions: sketchable and nonsketchable. For the sketchable regions (e.g., the contours of facial components and wrinkle lines), we use a graph-matching algorithm to find the best templates and transform the structure (shape); for the nonsketchable regions (e.g., the texture areas of facial components and skin), we learn active appearance models and transform the texture attributes in the corresponding principal component analysis (PCA) space. Both objective and subjective quantitative evaluations on 200 Asian frontal-face images selected from the public Lotus Hill Image database show that the proposed approach produces plausible gender-conversion results.
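
The sketchable branch relies on graph matching to pick the best opposite-gender templates. The paper's actual graph-matching algorithm is not reproduced in the abstract, so the following Python sketch substitutes a much simpler stand-in: ranking candidate templates by Procrustes disparity between landmark contours. All function and variable names are illustrative assumptions.

```python
# Hypothetical stand-in for the template-selection step on sketchable
# regions. The paper uses a graph-matching algorithm over sketch graphs;
# here we simply rank opposite-gender templates by Procrustes disparity
# between landmark contours. All names are illustrative.
import numpy as np
from scipy.spatial import procrustes

def select_template(source_landmarks: np.ndarray,
                    templates: list[np.ndarray]) -> int:
    """Return the index of the template contour that best matches the
    source component after Procrustes alignment.

    Each landmark array has shape (n_points, 2); every contour must be
    sampled with the same number of points.
    """
    best_idx, best_disparity = -1, np.inf
    for i, template in enumerate(templates):
        # procrustes removes translation, scale, and rotation, then
        # reports the residual shape difference (disparity).
        _, _, disparity = procrustes(source_landmarks, template)
        if disparity < best_disparity:
            best_idx, best_disparity = i, disparity
    return best_idx
```

In the paper, the matched template then guides the structural (shape) transform of the sketchable region; the Procrustes ranking above only replaces the matching step, not that transform.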
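
For the nonsketchable regions, the abstract describes learning active appearance models and transforming texture attributes in the corresponding PCA space. Below is a minimal sketch of such a PCA-space texture blend, assuming textures have already been warped to a common shape and flattened to vectors; the blending weight `alpha` and all names are assumptions for illustration, not details from the paper.

```python
# Minimal PCA-space texture transform, assuming shape-normalized face
# textures flattened to row vectors. Names and parameters are illustrative.
import numpy as np
from sklearn.decomposition import PCA

def fit_texture_pca(textures: np.ndarray, n_components: int = 50) -> PCA:
    """Learn a PCA texture basis from (n_samples, n_pixels) training data."""
    return PCA(n_components=n_components).fit(textures)

def transform_texture(pca: PCA, source: np.ndarray,
                      reference: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Blend source and opposite-gender reference textures in PCA space.

    alpha = 0.0 reproduces the source; alpha = 1.0 reproduces the reference.
    """
    c_src = pca.transform(source.reshape(1, -1))
    c_ref = pca.transform(reference.reshape(1, -1))
    # Linear interpolation of PCA coefficients, then back-projection.
    blended = (1.0 - alpha) * c_src + alpha * c_ref
    return pca.inverse_transform(blended).ravel()

# Usage with random stand-in data (real inputs would be shape-normalized
# grayscale textures extracted by an active appearance model).
rng = np.random.default_rng(0)
gallery = rng.standard_normal((200, 64 * 64))
pca = fit_texture_pca(gallery)
fused = transform_texture(pca, gallery[0], gallery[1], alpha=0.6)
```

Interpolating in the learned coefficient space, rather than on raw pixels, keeps the output on the face-texture manifold captured by the training set, which is the usual motivation for appearance-model-based editing.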