Efficient and robust 3D line drawings using difference-of-Gaussian

  • Authors:
  • Long Zhang, Jiazhi Xia, Xiang Ying, Ying He, Wolfgang Mueller-Wittig, Hock-Soon Seah

  • Affiliations:
  • Long Zhang: Hangzhou Dianzi University, Hangzhou City, Zhejiang Province 310018, China; Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore; Fraunhofer IDM@NTU, 50 Nanyang A ...
  • Jiazhi Xia: Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore; Central South University, Changsha City, Hunan Province 410083, China
  • Xiang Ying: Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore
  • Ying He: Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore
  • Wolfgang Mueller-Wittig: Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore; Fraunhofer IDM@NTU, 50 Nanyang Avenue, Singapore 639798, Singapore
  • Hock-Soon Seah: Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore

  • Venue:
  • Graphical Models
  • Year:
  • 2012


Abstract

Line drawings are widely used for sketches, animations, and technical illustrations because they are effective and easy to draw. Existing computer-generated line types, such as suggestive contours, apparent ridges, and demarcating curves, adopt a two-pass framework: in the first pass, certain geometric features or properties are extracted or computed in object space; in the second pass, the line drawings are rendered by iterating over each polygonal face or edge. These approaches are known to be very sensitive to mesh quality and usually require appropriate preprocessing (e.g., smoothing or remeshing) of the input meshes. This paper presents a simple yet robust approach to generating view-dependent line drawings for 3D models. Inspired by image-space edge detectors, we compute the difference-of-Gaussian of illumination on the 3D model. Under moderate assumptions, we show that all expensive computations can be done in a precomputation stage. Our method naturally integrates the object and image spaces: we compute the geometric features in object space and then adopt a simple fragment shader to render the lines in image space. As a result, our algorithm is more efficient than existing object-space approaches, since the lines are generated in a single pass without iterating over mesh edges or faces. Furthermore, our method is more flexible and robust than existing algorithms because it does not require preprocessing of the input 3D models. Finally, the difference-of-Gaussian operator can be extended to an anisotropic setting guided by local geometric features. Promising experimental results on a wide range of real-world models demonstrate the effectiveness and robustness of our method.
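The core operator the abstract describes, detecting lines where the difference of two Gaussian blurs of the illumination is large, can be sketched in one dimension. This is a minimal illustrative example, not the paper's GPU implementation: the parameter names `sigma`, `k`, and `tau`, and the 1D signal standing in for a scanline of illumination, are assumptions for the sketch.

```python
import math

def gaussian_kernel(sigma, radius):
    """1D Gaussian kernel, normalized to sum to 1."""
    k = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(signal, sigma):
    """Convolve a 1D signal with a Gaussian, clamping indices at the borders."""
    radius = max(1, int(3 * sigma))
    kernel = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp to edge
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog_lines(illumination, sigma=1.0, k=1.6, tau=0.05):
    """Difference-of-Gaussian: D = G_sigma - G_{k*sigma}; mark |D| > tau as line pixels."""
    narrow = blur_1d(illumination, sigma)
    wide = blur_1d(illumination, k * sigma)
    return [abs(a - b) > tau for a, b in zip(narrow, wide)]

# A step in illumination (e.g., the transition between a lit and a shadowed region):
illum = [1.0] * 10 + [0.0] * 10
edges = dog_lines(illum)  # True only near the step at indices 9-10
```

In the paper's setting the blurred illumination values are precomputed on the mesh, so at render time a fragment shader only has to evaluate the difference and threshold it, which is why the lines come out in a single pass.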