Appearance manifold of facial expression
ICCV'05 Proceedings of the 2005 international conference on Computer Vision in Human-Computer Interaction
We propose a novel approach for modeling, tracking, and recognizing facial expressions. Our method operates on a low-dimensional expression manifold obtained by Isomap embedding. In this space, facial contour features are first clustered using a mixture model, and expression dynamics are then learned for tracking and classification. We use ICondensation to track facial features in the embedded space while recognizing facial expressions cooperatively, within a common probabilistic framework. The image observation likelihood is derived from a variant of the Active Shape Model (ASM) algorithm. For each cluster in the low-dimensional space, a separate ASM model is learned, which avoids incorrect matching caused by nonlinear image variations. Preliminary experimental results show that our probabilistic facial expression model on the manifold significantly improves facial deformation tracking and expression recognition.
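The resample–propagate–weight loop at the core of ICondensation can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a hypothetical 1-D embedded state, a random-walk motion model in place of the learned expression dynamics, and a Gaussian likelihood around a scalar measurement in place of the ASM-derived observation likelihood.

```python
import math
import random

def condensation_step(particles, weights, observation,
                      motion_std=0.1, obs_std=0.2):
    """One Condensation-style particle-filter iteration in a 1-D space.

    particles   -- particle states in the low-dimensional space
    weights     -- normalized importance weights from the previous step
    observation -- current scalar measurement in the same space
    """
    n = len(particles)
    # 1. Select: resample particles according to their previous weights.
    resampled = random.choices(particles, weights=weights, k=n)
    # 2. Predict: propagate each particle (random walk stands in for
    #    the learned expression dynamics).
    predicted = [p + random.gauss(0.0, motion_std) for p in resampled]
    # 3. Measure: reweight by the observation likelihood (a Gaussian
    #    stands in for the ASM-based likelihood).
    new_weights = [math.exp(-0.5 * ((p - observation) / obs_std) ** 2)
                   for p in predicted]
    total = sum(new_weights) or 1.0
    new_weights = [w / total for w in new_weights]
    return predicted, new_weights

# Usage: track a state drifting toward 1.0 over five observations.
random.seed(0)
n = 200
particles = [random.uniform(-1.0, 1.0) for _ in range(n)]
weights = [1.0 / n] * n
for obs in (0.2, 0.4, 0.6, 0.8, 1.0):
    particles, weights = condensation_step(particles, weights, obs)
estimate = sum(p * w for p, w in zip(particles, weights))
```

After the final step, the weighted mean `estimate` lies close to the last observation, illustrating how the particle set concentrates around the tracked point on the manifold.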