Advanced human-computer interaction requires automatic reading of the human face so that the computer can interact with people as naturally as they communicate with each other. We developed an automatic face tracking and lip reading system, built around a 3D face avatar, to support HCI applications in speech learning, emotional state monitoring, and non-verbal human-computer interface design. The system implements a novel active face feature tracking algorithm that works with an uncalibrated camera. The 3D face pose is estimated and tracked by a Kalman filter-based matching process with dynamic face model updating and constraints. The resulting facial motion parameters are transferred to an individualized 3D face avatar, so that a person's lip shapes and expressions can be cloned onto the animated avatar; lip shapes produced by different subjects speaking the same utterance can then be compared and measured directly. This real-time system targets automatic facial expression analysis and synthesis for the next generation of HCI design.
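The abstract does not specify the form of the Kalman filter used in the matching process. As a rough illustration only, a scalar constant-velocity Kalman filter of the kind commonly used to smooth a single pose parameter (e.g. head yaw) over video frames might look like the sketch below; the class name, state layout, and noise values are all hypothetical, not taken from the paper.

```python
# Hypothetical sketch: a 1-D constant-velocity Kalman filter smoothing one
# head-pose angle per frame. The paper's actual filter and its dynamic
# face-model constraint are not described in the abstract.

class ScalarKalman:
    def __init__(self, q=1e-3, r=1e-2):
        self.x = [0.0, 0.0]                 # state: [angle, angular velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q = q                          # process-noise strength (assumed)
        self.r = r                          # measurement-noise variance (assumed)

    def step(self, z, dt=1.0):
        """Fuse one noisy angle measurement z; return the filtered angle."""
        # Predict with constant-velocity dynamics: angle += velocity * dt.
        x0 = self.x[0] + dt * self.x[1]
        x1 = self.x[1]
        P = self.P
        # Covariance predict: P = F P F^T + Q, with F = [[1, dt], [0, 1]].
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        # Update with the angle measurement (H = [1, 0]).
        s = p00 + self.r                    # innovation variance
        k0, k1 = p00 / s, p10 / s           # Kalman gain
        y = z - x0                          # innovation (measurement residual)
        self.x = [x0 + k0 * y, x1 + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]
```

In a full pose tracker this scalar recursion would be replaced by a joint state over all six pose parameters (and, per the abstract, constrained by the dynamically updated face model), but the predict/update cycle per frame is the same.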