This paper addresses the problem of 3D face tracking from a monocular view. Dominant tracking algorithms in current literature can be classified as intensity-based or feature-based methods. Intensity-based methods track 3D faces based on the brightness constraint, assuming constant intensity of the face across adjacent frames. Feature-based trackers use local 2D features to determine sparse pairs of corresponding points between two frames and estimate 3D pose from these correspondences. We argue that using either approach alone neglects valuable visual information used in the other method. We therefore propose a novel hybrid tracking approach that integrates multiple visual cues. The hybrid tracker uses a nonlinear optimization framework to incorporate both feature correspondence and brightness constraints, and achieves reliable 3D face tracking in real-time. We conduct a series of experiments to analyze our approach and compare its performance with other state-of-the-art trackers. The experiments consist of synthetic sequences with simulated environmental factors and real-world sequences with estimated ground truth. Results show that the hybrid tracker is superior in both accuracy and robustness, particularly when dealing with challenging conditions such as occlusion and extreme lighting. We close with a description of a real-world human-computer interaction application based on our hybrid tracker.
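The core idea of the hybrid formulation can be illustrated with a minimal sketch: stack feature-correspondence residuals and (linearized) brightness-constancy residuals into one vector and hand them to a nonlinear least-squares solver. This is not the paper's implementation; the pose here is a toy 2D translation `t` rather than full 6-DOF head pose, and the gradient matrix `g` merely stands in for sampled image gradients.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Sparse feature correspondences: model points X observed at X + t_true.
X = rng.uniform(0.0, 10.0, size=(20, 2))   # model points
t_true = np.array([1.5, -0.7])             # ground-truth motion (toy pose)
x_obs = X + t_true                         # matched 2D features

# Intensity cue, linearized brightness constraint: g . t ≈ dI,
# where g stands in for image gradients at sampled pixels and dI for
# the temporal intensity differences those gradients imply.
g = rng.normal(size=(50, 2))
dI = g @ t_true

def residuals(t, lam=1.0):
    # Feature term: reprojection error of the sparse correspondences.
    r_feat = ((X + t) - x_obs).ravel()
    # Intensity term: linearized brightness-constancy error.
    r_int = g @ t - dI
    # lam weights the two cues against each other.
    return np.concatenate([lam * r_feat, r_int])

result = least_squares(residuals, x0=np.zeros(2))
print(result.x)  # recovers t_true
```

Because both cues constrain the same pose parameters, either term alone suffices in this clean toy setup; the point of the joint objective is that when one cue degrades (occluded features, extreme lighting breaking brightness constancy), the other still anchors the estimate.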