Sketch realizing: lifelike portrait synthesis from sketch

  • Authors:
  • Di Wu; Qionghai Dai

  • Affiliations:
  • Tsinghua University, Beijing, P.R. China; Tsinghua University, Beijing, P.R. China

  • Venue:
  • Proceedings of the 2009 Computer Graphics International Conference
  • Year:
  • 2009


Abstract

People usually visualize their imaginations or memories through sketching. However, it is difficult for a sketch to record color and texture details. Powered by a large database of photographs gathered from the Web, this paper addresses the imagination-visualization problem for human faces. A framework is proposed for synthesizing lifelike portraits from user specifications and input sketches, which is the inverse process of sketch generation. The framework synthesizes the realistic appearance of a face by taking parts from an annotated library of face photographs, stitching them together, and then further deforming the result. The algorithm consists primarily of three parts: first, given user specifications and an input sketch, search the library for good matches for each facial component; second, extract each facial component from its matching source image and composite them together; third, deform the synthesized portrait to further approximate the sketch. The key component is a measurement for finding the right content to bridge the gap between shapes and lifelike images. A set of diverse lifelike portraits can be synthesized from a single sketch, and the effectiveness of the proposed approach is demonstrated with a variety of experimental results.
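The three-stage pipeline described above (per-component matching, compositing, sketch-guided deformation) can be sketched in outline form. Everything below is an illustrative assumption, not the paper's actual method: components are reduced to toy shape descriptors, matching uses plain Euclidean distance, and "deformation" is a simple blend toward the sketch descriptor.

```python
import math

# Hypothetical facial components; the paper's annotated library would hold
# image regions, not bare coordinate tuples as used in this toy sketch.
COMPONENTS = ["eyes", "nose", "mouth"]

def distance(a, b):
    # Euclidean distance between toy shape descriptors (an assumed measure,
    # standing in for the paper's shape-to-image matching measurement).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_components(sketch, library):
    """Stage 1: for each facial component, find the library entry whose
    descriptor is closest to the corresponding sketch component."""
    matches = {}
    for comp in COMPONENTS:
        best = min(library, key=lambda entry: distance(entry[comp], sketch[comp]))
        matches[comp] = best[comp]
    return matches

def composite(matches):
    """Stage 2: stitch the matched components into one portrait.
    Here they are simply collected; the paper composites image regions."""
    return dict(matches)

def deform(portrait, sketch, alpha=0.5):
    """Stage 3: pull each component toward the sketch shape so the
    synthesized portrait better approximates the drawing."""
    return {
        comp: tuple((1 - alpha) * p + alpha * s
                    for p, s in zip(portrait[comp], sketch[comp]))
        for comp in COMPONENTS
    }

def synthesize(sketch, library, alpha=0.5):
    # Match -> composite -> deform, mirroring the abstract's three parts.
    return deform(composite(match_components(sketch, library)), sketch, alpha)
```

A usage example with a two-entry toy library: given a sketch whose eyes resemble face 0 and whose nose resembles face 1, `synthesize` picks components from different source faces and blends them toward the sketch, which is how a single sketch can yield a set of diverse portraits (by varying the matches or the blend weight).

```python
library = [
    {"eyes": (0.0, 0.0), "nose": (1.0, 1.0), "mouth": (2.0, 2.0)},
    {"eyes": (5.0, 5.0), "nose": (6.0, 6.0), "mouth": (7.0, 7.0)},
]
sketch = {"eyes": (0.2, 0.2), "nose": (5.8, 5.8), "mouth": (2.1, 2.1)}
portrait = synthesize(sketch, library, alpha=0.5)
```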