Acquiring the reflectance field of a human face. Proceedings of the 27th annual conference on Computer graphics and interactive techniques.
A practical model for subsurface light transport. Proceedings of the 28th annual conference on Computer graphics and interactive techniques.
A rapid hierarchical rendering technique for translucent materials. Proceedings of the 29th annual conference on Computer graphics and interactive techniques.
Facial expression recognition using a dynamic model and motion energy. ICCV '95 Proceedings of the Fifth International Conference on Computer Vision.
Universal capture: image-based facial animation for "The Matrix Reloaded". ACM SIGGRAPH 2003 Sketches & Applications.
Spacetime faces: high resolution capture for modeling and animation. ACM SIGGRAPH 2004 Papers.
Reflectance field rendering of human faces for "Spider-Man 2". SIGGRAPH '04 ACM SIGGRAPH 2004 Sketches.
Animatable facial reflectance fields. EGSR'04 Proceedings of the Fifteenth Eurographics conference on Rendering Techniques.
Face/Off: live facial puppetry. Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation.
Modeling a face and rendering it realistically is a hard problem in itself, and remarkable progress toward realistic-looking faces has been made from a modeling perspective [1, 6, 13, 15, 16, 2] as well as a rendering perspective [5, 11, 12]. At last year's SIGGRAPH 2005, the Digital Face Cloning course covered relevant material to this end. An even bigger problem is animating the digital face in a realistic and believable manner that stands up to close scrutiny, where even the slightest incorrectness in the animated performance becomes glaringly unacceptable.

While good facial animation (stylized or realistic) can be attempted via traditional keyframe techniques by skilled animators, it is a complicated and often time-consuming task, especially as the desired results approach realistic imagery. When an exact replica of an actor's performance is desired, many processes today work by tracking features on the actor's face and using information derived from these tracked features to directly drive the digital character. These features range from a few marker samples [3], to curves or contours [15] on the face, and even to a deforming surface of the face [2, 16].

This may seem like a one-stop process in which data derived from a captured performance can be programmatically translated into animation on a digital CG face. On the contrary, given today's capture, retargeting, and animation technologies, it can be a rather involved process, depending on the quality of the data, the exactness and realness required in the final animation, and facial calibration; it often requires the expertise of both artists (trackers, facial riggers, technical animators) and software technology to make the end product happen. Setting up a facial pipeline that captures many actors' performances simultaneously to ultimately produce hundreds of shots, while embracing inputs and controls from artists and animators, can also be quite a challenge.
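To make the idea of tracked features "directly driving" a digital character concrete, here is a minimal, hypothetical sketch (not Imageworks' actual pipeline): given a handful of tracked marker positions and a small set of blendshape deltas sampled at those markers, the per-frame shape weights can be solved in a least-squares sense. All names and the toy data below are illustrative assumptions.

```python
import numpy as np

def solve_blendshape_weights(neutral, deltas, frame):
    """Solve for blendshape weights reproducing one captured frame.

    neutral: (m, 3) rest-pose marker positions
    deltas:  (k, m, 3) per-shape marker displacements from neutral
    frame:   (m, 3) tracked marker positions for this frame
    Returns a (k,) weight vector, clamped to [0, 1].
    """
    A = deltas.reshape(len(deltas), -1).T   # (3m, k) basis matrix
    b = (frame - neutral).ravel()           # (3m,) observed offsets
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, 1.0)

# Toy example: two markers, two shapes (say, "jaw open" and "smile").
neutral = np.zeros((2, 3))
deltas = np.array([[[0, -1, 0], [0, 0, 0]],   # shape 0 moves marker 0 down
                   [[0, 0, 0], [1, 0, 0]]],   # shape 1 moves marker 1 sideways
                  dtype=float)
frame = np.array([[0.0, -0.5, 0.0], [0.25, 0.0, 0.0]])
print(solve_blendshape_weights(neutral, deltas, frame))  # weights ~ [0.5, 0.25]
```

Real systems add temporal smoothing, sparsity or range constraints on the weights, and far richer shape bases, but the core fit-then-drive step has this flavor.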
This course document attempts to explain some of the processes we have come to understand and gained experience with while working on Columbia's Monster House and other motion capture-reliant shows at Sony Pictures Imageworks. The document is organized as follows. Section 1 presents general ideas on what constitutes a performance. Section 2 explains how facial performance is captured using motion capture technologies at Imageworks. Section 3 explains the background research that forms the basis of our facial system at Imageworks: FACS, initially devised by Paul Ekman et al. Although FACS has been used widely in research and literature [7], at Sony Pictures Imageworks we have used it on motion-captured facial data to drive character faces. Sections 4, 5, and 6 explain how motion-captured facial data is processed, stabilized, cleaned, and finally retargeted onto a digital face. Finally, we conclude with a motivating discussion of artistic versus software problems in driving a digital face with a performance.
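One of the steps named above, stabilization, means removing the rigid head motion from the markers so only facial deformation remains. A common way to do this (sketched here as an assumption, not as Imageworks' specific method) is to align a set of markers presumed rigid (forehead, nose bridge) to a reference frame with the Kabsch algorithm:

```python
import numpy as np

def stabilize(frame, reference, stable_idx):
    """Remove rigid head motion from one frame of facial markers.

    frame, reference: (m, 3) marker positions
    stable_idx: indices of markers assumed to move only rigidly
    Returns the frame expressed in the reference's head space.
    """
    P = frame[stable_idx]
    Q = reference[stable_idx]
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # rotation mapping frame -> reference
    return (frame - pc) @ R.T + qc
```

After this step, marker trajectories encode expression change rather than head motion, which is what the cleanup and retargeting stages operate on.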