Detecting Faces in Images: A Survey
IEEE Transactions on Pattern Analysis and Machine Intelligence
Coordination of Perceptual Processes for Computer Mediated Communication
FG '96 Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition (FG '96)
A Stabilized Adaptive Appearance Changes Model for 3D Head Tracking
RATFG-RTS '01 Proceedings of the IEEE ICCV Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems (RATFG-RTS'01)
Robust Real-Time Face Detection
International Journal of Computer Vision
CVPRW '04 Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'04) Volume 5 - Volume 05
Model Based Facial Pose Tracking Using a Particle Filter
GMAI '06 Proceedings of the conference on Geometric Modeling and Imaging: New Trends
Face recognition using 2D and disparity eigenface
Expert Systems with Applications: An International Journal
Towards a theory of early visual processing
Neural Computation
Multimodal focus attention and stress detection and feedback in an augmented driver simulator
Personal and Ubiquitous Computing
Modeling visual perception for image processing
IWANN'07 Proceedings of the 9th International Work-Conference on Artificial Neural Networks
Drowsy driver detection through facial movement analysis
HCI'07 Proceedings of the 2007 IEEE international conference on Human-computer interaction
Using Human Visual System modeling for bio-inspired low level image processing
Computer Vision and Image Understanding
Determining driver visual attention with one camera
IEEE Transactions on Intelligent Transportation Systems
This paper demonstrates the advantages of exploiting properties of the human visual system to develop a set of fusion algorithms for the automatic analysis and interpretation of global and local facial motions. The proposed fusion algorithms rely on information from human vision models, such as the models of the human retina and primary visual cortex previously developed at Gipsa-lab. Starting from a set of low-level bio-inspired modules (static and moving contour detector, motion event detector, and spectrum analyser) that are very efficient for video pre-processing, it is shown how to combine them to achieve reliable face motion interpretation. In particular, algorithms are proposed for global head motion analysis (e.g., head nods), local eye motion analysis (e.g., blinking), local mouth motion analysis (e.g., speech lip motion and yawning), and open/closed mouth and eye state detection, and their performance is assessed. Because the human vision model pre-processing decorrelates visual information in a reliable manner, the fusion algorithms are simplified and remain robust against typical video acquisition problems (lighting changes, object detection failures, etc.).
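The motion event detector mentioned in the abstract can be illustrated with a minimal sketch. Note this is not the paper's bio-inspired retina model: it is a crude frame-differencing stand-in for the transient (motion) channel, and the function name, threshold values, and grayscale-input assumption are all illustrative choices, not taken from the source.

```python
import numpy as np

def motion_event_detector(frames, threshold=10.0, min_active_fraction=0.02):
    """Flag frames where a motion event occurs.

    A frame is flagged when the absolute inter-frame difference
    exceeds `threshold` over at least `min_active_fraction` of the
    pixels. Hypothetical stand-in for a bio-inspired transient
    channel; parameters are illustrative.
    """
    events = [False]  # no event can be declared on the first frame
    for prev, curr in zip(frames, frames[1:]):
        # Per-pixel temporal difference (cast to float to avoid
        # uint8 wrap-around on subtraction).
        diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
        # Fraction of pixels whose change exceeds the threshold.
        active = np.mean(diff > threshold)
        events.append(bool(active > min_active_fraction))
    return events
```

On a static scene the detector stays quiet; a sudden large change (e.g., a blink or head movement in the region of interest) produces a single event flag, which a higher-level fusion stage could then combine with other module outputs.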