Imitating natural facial behavior, such as laughter and other nonverbal expressions, in real time remains challenging. This paper describes our ongoing work on methodologies and tools for estimating Facial Animation Parameters (FAPs) and intensities of Action Units (AUs) in order to imitate lifelike facial expressions with an MPEG-4 compliant Embodied Conversational Agent (ECA), the GRETA agent (Bevacqua et al. 2007). First, we investigate available open source tools for accurate facial landmark localization. Second, FAPs and AU intensities are estimated from facial landmarks computed with an open source face tracker. Finally, the paper discusses our ongoing work to compare FAP-based and AU-based re-synthesis technologies through perceptual studies of: (i) the naturalness of the synthesized facial expressions; (ii) their perceived similarity to the original user's behavior.
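To illustrate the kind of computation involved in estimating a FAP from tracked landmarks, the sketch below expresses one landmark's displacement from a neutral face in FAPU units (MPEG-4 FAPs are signed displacements measured in fractions of a Facial Animation Parameter Unit). The landmark names, the choice of FAPU, and the toy coordinates are illustrative assumptions, not the paper's actual method.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def estimate_fap(neutral, current, landmark, axis, fapu):
    """Displacement of one landmark from the neutral face, in FAPU units.

    Approximates a FAP value as (current - neutral) along one axis,
    divided by the relevant FAPU, following the MPEG-4 convention of
    expressing FAPs as fractions of face-proportion units.
    """
    delta = current[landmark][axis] - neutral[landmark][axis]
    return delta / fapu

# Toy neutral/current landmark sets, (x, y) in pixel coordinates
# (hypothetical landmark names, not from the paper's tracker).
neutral = {"nose_tip": (100.0, 120.0),
           "mouth_top": (100.0, 150.0),
           "jaw_bottom": (100.0, 190.0)}
current = dict(neutral, jaw_bottom=(100.0, 205.0))  # jaw dropped 15 px

# MNS FAPU: mouth-nose separation divided by 1024, per MPEG-4.
mns = dist(neutral["nose_tip"], neutral["mouth_top"]) / 1024.0

# Jaw-opening displacement along the y axis, in MNS-based FAPU units.
open_jaw = estimate_fap(neutral, current, "jaw_bottom", 1, mns)
print(open_jaw)  # → 512.0
```

In practice such per-frame FAP values would be smoothed over time and clamped to the ranges the ECA's animation engine accepts; the same displacement-over-reference-distance idea can serve as a crude proxy for AU intensity.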