Simulation of face/hairstyle swapping in photographs with skin texture synthesis

  • Authors:
  • Jia-Kai Chou; Chuan-Kai Yang

  • Affiliations:
  • Department of Information Management, National Taiwan University of Science and Technology, Taipei, Republic of China 106 (both authors)

  • Venue:
  • Multimedia Tools and Applications
  • Year:
  • 2013

Abstract

The modern trend of diversification and personalization has encouraged people to boldly express their individuality in many aspects, and one noticeable sign is the wide variety of hairstyles that can be observed today. Given the need for hairstyle customization, approaches and systems, ranging from 2D to 3D and from automatic to manual, have been proposed or developed to digitally facilitate the choice of hairstyles. However, nearly all existing approaches fall short of producing realistic hairstyle synthesis results. Assuming the inputs are 2D photos, the vividness of a re-synthesized hairstyle relies heavily on the removal of the original one, because the co-existence of the original hairstyle and the newly re-synthesized hairstyle may lead to serious perceptual artifacts. We resolve this issue by extending the active shape model to extract the entire facial contour more precisely, which can then be used to trim away the hair from the input photo. After hair removal, the facial skin of the revealed forehead needs to be recovered. Since the skin texture is non-stationary and little information is left, traditional texture synthesis and image inpainting approaches are ill-suited to this problem. Our proposed method produces a more plausible facial skin patch by first interpolating a base skin patch and then applying a non-stationary texture synthesis. In this paper, we also aim to reduce the user assistance required during this process as much as possible. We have devised a new and friendly facial contour and hairstyle adjustment mechanism that makes it easy to manipulate and fit a desired hairstyle onto a face. In addition, our system is equipped with the functionality of extracting the hairstyle from a given photo, which makes our work more complete. Moreover, by extracting the face from the input photo, our system also allows users to exchange faces. At the end of this paper, our re-synthesized results are shown, comparisons are made, and user studies are presented to further demonstrate the usefulness of our system.
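
To make the pipeline described in the abstract more concrete, the Python sketch below illustrates two of its steps at a very rough level: locating facial landmarks (here with dlib's stock 68-point predictor as a stand-in for the paper's extended active shape model) and filling the forehead revealed after hair removal with a smooth base skin patch interpolated from a cheek sample. The predictor file path, window sizes, and blending weights are assumptions for illustration only; the paper's own contour model and its non-stationary texture synthesis step are not reproduced here.

import cv2
import dlib
import numpy as np

# Stock dlib face detector and 68-point landmark predictor; the model file
# path below is an assumption (the pretrained file must be downloaded).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_landmarks(img_bgr):
    """Return the 68 landmarks of the first detected face as a (68, 2) array."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if len(faces) == 0:
        raise ValueError("no face detected")
    shape = predictor(gray, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.int32)

def fill_forehead(img_bgr, forehead_mask, landmarks):
    """Fill the masked forehead region with a smooth skin patch interpolated
    from a cheek sample (a rough analogue of the paper's base-patch step)."""
    # Sample skin colour from a small window near the left cheek (landmark 2).
    cx, cy = landmarks[2]
    cheek = img_bgr[max(cy - 20, 0):cy, cx:cx + 20].reshape(-1, 3).astype(np.float64)
    base_colour = cheek.mean(axis=0)

    # Blend the base colour with low-frequency image content so the filled
    # patch follows the photo's lighting instead of being a flat colour.
    blurred = cv2.GaussianBlur(img_bgr, (0, 0), sigmaX=15).astype(np.float64)
    filled = img_bgr.astype(np.float64)
    mask = forehead_mask.astype(bool)
    filled[mask] = 0.6 * base_colour + 0.4 * blurred[mask]
    return np.clip(filled, 0, 255).astype(np.uint8)

In practice, the forehead mask would itself come from the hair removal driven by the paper's extended facial contour model; in this sketch it is simply an input supplied by the caller.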