This paper presents a method to recover the full motion (3 rotations and 3 translations) of the head using a cylindrical model. The robustness of the approach comes from a combination of three techniques. First, we use the iteratively re-weighted least squares (IRLS) technique in conjunction with the image gradient to deal with non-rigid motion and occlusion. Second, while tracking, the templates are dynamically updated to diminish the effects of self-occlusion and gradual lighting changes, and to keep tracking the head even when most of the face is not visible. Third, because the dynamic templates may cause error accumulation, we re-register images to a reference frame whenever the head pose is close to a reference pose. The performance of the real-time tracking program was evaluated in three separate experiments using image sequences (both synthetic and real) for which the ground-truth head motion is known. The real sequences included pitch and yaw as large as 40° and 75°, respectively. The average recovery accuracy of the 3D rotations was found to be about 3°.
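The first technique above, IRLS, repeatedly solves a weighted least-squares problem, downweighting pixels whose residuals are large (e.g., those affected by non-rigid motion or occlusion). As a hedged illustration of the idea, the sketch below applies IRLS with Huber weights to a toy 1-D line-fitting problem with gross outliers; it is not the paper's full 6-DOF motion estimator, and the weight function and scale estimate (MAD) are common choices assumed here, not taken from the paper.

```python
# Illustrative IRLS sketch: robustly fit A x ~= b in the presence of outliers.
# In the paper, the same principle is applied to head-motion parameters, with
# residuals coming from image intensities/gradients rather than a toy line.
import numpy as np

def irls(A, b, n_iters=20, c=1.345):
    """Solve A x ~= b robustly via iteratively re-weighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # ordinary LS initialization
    for _ in range(n_iters):
        r = b - A @ x                                  # residuals under current fit
        s = np.median(np.abs(r)) / 0.6745 + 1e-12      # robust scale estimate (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)               # Huber weights: shrink outliers
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Points on the line y = 2 t + 1, corrupted by two gross outliers.
t = np.arange(10, dtype=float)
b = 2.0 * t + 1.0
b[3] += 30.0
b[7] -= 25.0
A = np.column_stack([t, np.ones_like(t)])

x = irls(A, b)
print(x)  # close to [2, 1] despite the outliers
```

Ordinary least squares on the same data would be pulled noticeably off the true line by the two corrupted points; the re-weighting loop suppresses their influence, which is exactly why the paper pairs IRLS with dynamic templates for occlusion-tolerant tracking.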