IEEE Transactions on Pattern Analysis and Machine Intelligence
The ability to detect and track human heads and faces in video sequences is useful in a great number of applications, such as human-computer interaction and gesture recognition. Recently, we proposed a real-time tracker that simultaneously tracks the 3D head pose and the facial actions associated with the lips and eyebrows in monocular video sequences. The approach relies on Online Appearance Models, in which the facial texture is learned during tracking. This paper extends our previous work in two directions. First, we show that adopting a non-occluded facial texture model yields more accurate and stable 3D head pose parameters. Second, unlike previous approaches to eyelid tracking, we show that Online Appearance Models can be used for this purpose. Our approach uses neither color information nor intensity edges. Moreover, our eyelid tracking does not rely on any eye feature extraction, which could otherwise produce erroneous results whenever the eye feature detector fails. Experiments on real videos show the feasibility and usefulness of the proposed approach.
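The core idea of an Online Appearance Model — a per-pixel appearance statistic that is learned and refreshed as tracking proceeds — can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a common recursive Gaussian formulation with an exponential forgetting factor, and the class name, `alpha` value, and texture shape are illustrative.

```python
import numpy as np

class OnlineAppearanceModel:
    """Illustrative per-pixel Gaussian appearance model updated
    recursively with an exponential forgetting factor."""

    def __init__(self, first_texture, alpha=0.05):
        self.alpha = alpha                       # forgetting factor (assumed value)
        self.mean = first_texture.astype(float)  # per-pixel texture mean
        self.var = np.ones_like(self.mean)       # per-pixel texture variance

    def distance(self, texture):
        """Variance-normalized distance of a candidate warped texture
        to the current model; lower is a better match."""
        return float(np.mean((texture - self.mean) ** 2 / self.var))

    def update(self, texture):
        """Fold the texture of the newly tracked frame into the model,
        so the appearance adapts to gradual changes over time."""
        diff = texture - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * self.var + self.alpha * diff ** 2
```

In a tracking loop, each candidate head-pose/facial-action hypothesis would warp the frame to a canonical texture, score it with `distance()`, and the winning texture would then be passed to `update()` so the model follows slow appearance changes.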