A Simple Method for 3-Dimensional Photorealistic Facial Modeling and Consideration the Reconstructing Error

  • Authors:
  • Ippei Torii; Yousuke Okada; Masayuki Mizutani; Naohiro Ishii

  • Affiliations:
  • Management and Information Science, Aichi Institute of Technology, Toyota-shi, Japan (all authors)

  • Venue:
  • KES '09 Proceedings of the 13th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems: Part II
  • Year:
  • 2009

Abstract

The process of creating photorealistic 3-dimensional computer graphics (3DCG) images is divided into two stages: modeling and rendering. Automatic rendering has gained popularity, and photorealistic rendering is now widely used for many types of images. However, professional artists still model characters manually. Moreover, little progress has been made in 3-D shape data acquisition techniques that can be applied to facial modeling; this is an important problem hampering the progress of 3DCG. Conventionally, a laser and a highly accurate camera are used to acquire 3-D shape data, but this technique is time-consuming and expensive, and the laser may damage the subject's eyes during measurement. To solve these problems, we propose a simple method for 3-D shape data acquisition that uses a projector and a web camera. This method is economical, simple, and less time-consuming than conventional techniques. In this paper, we describe the setup of the projector and web camera, the shape data acquisition process, the image processing, and the generation of a photorealistic image. We evaluate the error margin and verify the accuracy of the method by comparing a photograph of a face with its rendered image. Finally, we extract only the lip and mouth region from the acquired facial model data and extend it to animation.
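The abstract does not give the details of the projector-camera acquisition step, but such setups commonly recover depth by intersecting a camera ray with a known projected light plane. The following is a minimal, hypothetical sketch of that geometry; the pinhole intrinsics `K`, the plane parameterization, and the function name are all illustrative assumptions, not the authors' actual algorithm.

```python
# Hypothetical projector-camera triangulation sketch (not the paper's code).
# Assumes: camera at the origin with pinhole intrinsics K, and a projected
# stripe whose light plane a*x + b*y + c*z + d = 0 is known from calibration.
import numpy as np

def triangulate_stripe(cam_pixel, stripe_plane, K):
    """Intersect the camera ray through `cam_pixel` with the projector's
    light plane; return the 3-D point on the illuminated surface."""
    u, v = cam_pixel
    # Back-project the pixel to a ray direction in camera coordinates.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    a, b, c, d = stripe_plane
    normal = np.array([a, b, c])
    denom = normal @ ray
    if abs(denom) < 1e-9:
        raise ValueError("camera ray is parallel to the light plane")
    t = -d / denom
    return t * ray

# Example with made-up intrinsics and the plane z = 2 (i.e. 0x + 0y + 1z - 2 = 0):
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
point = triangulate_stripe((320.0, 240.0), (0.0, 0.0, 1.0, -2.0), K)
print(point)  # the principal-axis ray meets the plane at (0, 0, 2)
```

Sweeping the stripe across the face and repeating this intersection per illuminated pixel would yield the kind of dense 3-D point set the abstract refers to.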