This paper describes a probabilistic framework for the faithful reproduction of dynamic facial expressions on a synthetic face model driven by MPEG-4 facial animation parameters (FAPs), while achieving a very low bitrate in data transmission. The framework consists of a coupled Bayesian network (BN) that unifies facial expression analysis and synthesis in one coherent structure. At the analysis end, the FAPs and the facial action coding system (FACS) are cast into a dynamic Bayesian network (DBN) to account for uncertainties in FAP extraction and to model the dynamic evolution of facial expressions. At the synthesis end, a static BN reconstructs the FAPs and their intensities. The two BNs are connected statically through a data-stream link. Using the coupled BN to analyze and synthesize dynamic facial expressions is the major novelty of this work, and it brings several benefits. First, a very low bitrate (9 bytes per frame) can be achieved in data transmission. Second, a facial expression is inferred through both spatial and temporal inference, so the perceptual quality of the animation is less affected by misdetected FAPs. Third, more realistic-looking facial expressions can be reproduced by modelling the dynamics of human expressions.
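To make the 9-bytes-per-frame figure concrete, the sketch below shows one plausible way such a compact per-frame payload could be laid out and (de)serialised. The layout itself is an assumption for illustration only — the abstract does not specify the packet format — here taken as one byte for the inferred expression class plus eight quantised intensity bytes, which the synthesis-side BN could expand back into full FAP values.

```python
import struct

# Hypothetical 9-byte per-frame payload (layout assumed, not from the
# paper): 1 byte = inferred expression class, 8 bytes = quantised
# intensities in [0, 1], each mapped to an unsigned byte 0..255.
FRAME_FORMAT = "B8B"  # 9 unsigned bytes in total

def encode_frame(expr_class: int, intensities: list) -> bytes:
    """Pack one animation frame into the assumed 9-byte payload."""
    quantised = [min(255, max(0, round(v * 255))) for v in intensities]
    return struct.pack(FRAME_FORMAT, expr_class, *quantised)

def decode_frame(payload: bytes):
    """Unpack a 9-byte payload back into (expression class, intensities)."""
    expr_class, *quantised = struct.unpack(FRAME_FORMAT, payload)
    return expr_class, [q / 255.0 for q in quantised]

if __name__ == "__main__":
    packet = encode_frame(2, [0.5] * 8)
    print(len(packet), decode_frame(packet)[0])  # 9 bytes, class 2
```

In the paper's architecture the analysis-side DBN would produce the expression class and intensities frame by frame, and the static BN at the receiver would reconstruct FAPs from them; the quantisation step above is simply one way to hit a fixed 9-byte budget.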