From image parsing to painterly rendering

  • Authors:
  • Kun Zeng; Mingtian Zhao; Caiming Xiong; Song-Chun Zhu

  • Affiliations:
  • Kun Zeng: Lotus Hill Institute, Hubei Province, China
  • Mingtian Zhao: Lotus Hill Institute, Hubei Province, China, and University of California, Los Angeles, CA
  • Caiming Xiong: Lotus Hill Institute, Hubei Province, China
  • Song-Chun Zhu: Lotus Hill Institute, Hubei Province, China, and University of California, Los Angeles, CA

  • Venue:
  • ACM Transactions on Graphics (TOG)
  • Year:
  • 2009

Abstract

We present a semantics-driven approach to stroke-based painterly rendering, based on recent image parsing techniques [Tu et al. 2005; Tu and Zhu 2006] in computer vision. Image parsing integrates segmentation for regions, sketching for curves, and recognition for object categories. In an interactive manner, we decompose an input image into a hierarchy of its constituent components, represented as a parse tree with occlusion relations among its nodes. To paint the image, we build a brush dictionary containing a large set (760) of brush examples in four shape/appearance categories, collected from professional artists. We then select appropriate brushes from the dictionary and place them on the canvas, guided by the image semantics encoded in the parse tree, with each image component and layer painted in various styles. During this process, the scene and object categories also determine the color blending and shading strategies for the inhomogeneous synthesis of image details. Compared with previous methods, this approach benefits from richer and more meaningful image semantics, which leads to better simulation of artists' painting techniques using the high-quality brush dictionary. We have tested our approach on a large number (hundreds) of images, and it produced satisfactory painterly effects.
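The abstract gives no code, but the pipeline it describes (a parse tree with occlusion ordering, a categorized brush dictionary, and semantics-guided brush placement) can be sketched roughly as below. This is a minimal illustrative sketch under stated assumptions: the names (ParseNode, BrushDictionary, paint, style_for), the four shape category labels, and the category-to-style mapping are all hypothetical, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List
import random

# Hypothetical sketch of the semantics-driven painting pipeline described in
# the abstract; the data structures, category names, and rendering order are
# assumptions, not the authors' actual system.

@dataclass
class ParseNode:
    """One node of the parse tree: an image component with its semantics."""
    category: str                       # recognition result, e.g. "sky", "tree", "face"
    depth: int                          # occlusion order: larger = closer to the viewer
    region_mask: object = None          # pixels covered by this component (segmentation)
    sketch_curves: list = field(default_factory=list)   # structure/boundary curves (sketching)
    children: List["ParseNode"] = field(default_factory=list)

    def traverse(self):
        """Yield this node and all of its descendants."""
        yield self
        for child in self.children:
            yield from child.traverse()


class BrushDictionary:
    """Example brushes grouped into shape/appearance categories."""
    def __init__(self):
        # Four assumed shape/appearance categories; the abstract only states
        # that there are four categories and 760 examples in total.
        self.brushes = {"point": [], "curve": [], "block": [], "texture": []}

    def select(self, shape_category: str):
        """Pick a brush example suitable for the requested shape category."""
        candidates = self.brushes.get(shape_category, [])
        return random.choice(candidates) if candidates else None


def style_for(category: str) -> dict:
    """Map an object/scene category to a painting style (assumed mapping)."""
    table = {
        "sky":  {"shape": "block",   "blend": "wet",  "stroke_density": 0.3},
        "tree": {"shape": "texture", "blend": "dry",  "stroke_density": 0.8},
        "face": {"shape": "curve",   "blend": "soft", "stroke_density": 0.6},
    }
    return table.get(category, {"shape": "block", "blend": "dry", "stroke_density": 0.5})


def paint(root: ParseNode, dictionary: BrushDictionary, canvas: list) -> list:
    """Paint components back-to-front so occluding objects are drawn last."""
    nodes = sorted(root.traverse(), key=lambda n: n.depth)
    for node in nodes:
        style = style_for(node.category)
        brush = dictionary.select(style["shape"])
        # Record one painting action per component; real placement would follow
        # the region's orientation field, the sketch curves, and the chosen
        # color blending and shading strategy.
        canvas.append((node.category, brush, style))
    return canvas
```

A usage example would build a small tree (say, a sky node behind a tree node), register a few brush examples in each category, and call paint(root, dictionary, []). The actual system described in the abstract constructs the parse tree interactively and uses far richer brush appearance models and stroke placement rules.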