MULTIMEDIA '98 Proceedings of the sixth ACM international conference on Multimedia: Face/gesture recognition and their applications
Previous efforts at facial expression recognition have been based on the Facial Action Coding System (FACS), a representation developed to allow human psychologists to code expressions from static facial "mugshots." We develop new, more accurate representations of facial expression by building a video database of facial expressions and then probabilistically characterizing the facial muscle activation associated with each expression, using a detailed physical model of the skin and muscles. This yields a muscle-based representation of facial motion, which is then used to recognize facial expressions in two different ways. The first method uses the physics-based model directly, recognizing expressions by comparing estimated muscle activations. The second method uses the physics-based model to generate spatio-temporal motion-energy templates of the whole face for each expression. These simple, biologically plausible motion-energy templates are then used for recognition. Both methods achieve substantially greater accuracy at expression recognition than has previously been reported.
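The second method's core idea — accumulating whole-face motion into a per-expression spatio-temporal energy template and classifying a new sequence by nearest template — can be illustrated with a deliberately simplified sketch. This is not the paper's detailed physical skin-and-muscle model; the frame-differencing energy measure and Euclidean nearest-template matching below are assumptions chosen only to make the template-matching idea concrete:

```python
import numpy as np

def motion_energy_template(frames):
    """Collapse a grayscale sequence (T, H, W) into one spatial map by
    accumulating absolute inter-frame differences (a crude stand-in for
    the model-derived motion energy used in the paper)."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.sum(axis=0)

def classify(template, class_templates):
    """Return the label of the nearest class template (Euclidean distance)."""
    dists = {name: np.linalg.norm(template - t)
             for name, t in class_templates.items()}
    return min(dists, key=dists.get)
```

In use, one template would be built per expression from training sequences, and a probe sequence is labeled by whichever expression's template its own energy map most closely matches.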