We present a new algorithm for realtime face tracking on commodity RGB-D sensing devices. Our method requires no user-specific training, calibration, or any other form of manual assistance, enabling a range of new consumer-level applications in performance-based facial animation and virtual interaction. The key novelty of our approach is an optimization algorithm that jointly solves for a detailed 3D expression model of the user and the corresponding dynamic tracking parameters. Realtime performance and robust computation are facilitated by a novel subspace parameterization of the dynamic facial expression space. A detailed evaluation shows that our approach significantly simplifies the performance capture workflow while achieving accurate facial tracking for realtime applications.
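The joint solve described above can be illustrated with a minimal sketch. Assuming (purely for illustration — the dimensions, variable names, and the use of plain alternating least squares are our own simplifications, not the paper's actual formulation) a linear blendshape basis `B` for expression, a linear PCA basis `P` for user identity, and a flattened point cloud `scan` as the per-frame observation, one can alternate between fitting the dynamic expression weights and refining the user-specific model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: n stacked vertex coordinates, k expression
# blendshapes, m identity modes. All stand-ins, not the paper's values.
n, k, m = 300, 8, 4

B = rng.normal(size=(n, k))      # expression blendshape basis (assumed linear)
P = rng.normal(size=(n, m))      # identity PCA basis (assumed linear)
mean_face = rng.normal(size=n)   # mean/neutral face

# Synthesize an "observed" scan from ground-truth parameters so the
# example is self-contained and its accuracy can be checked.
w_true = rng.uniform(0.0, 1.0, size=k)   # expression weights
y_true = rng.normal(size=m)              # identity coefficients
scan = mean_face + B @ w_true + P @ y_true

def joint_solve(scan, iters=20):
    """Block-coordinate descent: fix identity, solve expression; then swap.

    Each step is an ordinary least-squares fit, so every iteration can
    only decrease the reconstruction residual.
    """
    w = np.zeros(k)   # dynamic tracking parameters (expression weights)
    y = np.zeros(m)   # user-specific identity coefficients
    for _ in range(iters):
        # Expression step: fit blendshape weights to the residual.
        w, *_ = np.linalg.lstsq(B, scan - mean_face - P @ y, rcond=None)
        # Identity step: refine the user model against the same frame.
        y, *_ = np.linalg.lstsq(P, scan - mean_face - B @ w, rcond=None)
    return w, y

w_est, y_est = joint_solve(scan)
residual = np.linalg.norm(mean_face + B @ w_est + P @ y_est - scan)
print(f"reconstruction residual: {residual:.2e}")
```

In the full system, the identity estimate would be accumulated across many frames while the expression weights are re-solved per frame; here both are fit to a single synthetic observation to keep the sketch short.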