The emerging MPEG-4 standard supports the transmission and composition of facial animation together with natural video. The standard includes a facial animation parameter (FAP) set, defined from a study of minimal facial actions and closely related to muscle actions. The FAP set enables model-based representation of natural or synthetic talking-head sequences and allows intelligible visual reproduction of facial expressions, emotions, and speech pronunciations at the receiver. This paper addresses data compression for talking heads and presents three methods for reducing the bit rate of FAPs: transform coding, principal component analysis, and FAP interpolation. Because these methods are independent of one another, they can be applied in combination to further lower the FAP bit-rate demand, making it possible to transmit multiple talking heads over band-limited channels. The basic methods described here have been adopted into the MPEG-4 Visual Committee Draft and are readily applicable to other articulation data, such as body animation parameters. The efficacy of the methods is demonstrated by both subjective and objective results.
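To illustrate the principal-component-analysis idea mentioned above, the sketch below compresses a sequence of FAP vectors by projecting them onto a few principal components, so the encoder transmits a handful of coefficients per frame instead of the full parameter set. This is a minimal NumPy sketch, not the MPEG-4 codec itself: the synthetic data, the 68-parameter frame size, and the choice of five components are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a talking-head sequence: 200 frames of 68 FAPs.
# Real FAP trajectories are strongly correlated across parameters, which
# is what PCA exploits; we mimic that with a low-rank signal plus noise.
n_frames, n_faps, true_rank = 200, 68, 5
basis = rng.normal(size=(true_rank, n_faps))
coeffs = rng.normal(size=(n_frames, true_rank))
faps = coeffs @ basis + 0.01 * rng.normal(size=(n_frames, n_faps))

# PCA via SVD of the mean-centred data.
mean = faps.mean(axis=0)
centred = faps - mean
u, s, vt = np.linalg.svd(centred, full_matrices=False)

# Keep k principal components: the encoder sends k coefficients per
# frame (plus the basis and mean once, up front) instead of 68 values.
k = 5
compressed = centred @ vt[:k].T            # (n_frames, k) coefficients
reconstructed = compressed @ vt[:k] + mean  # decoder side

rmse = np.sqrt(np.mean((faps - reconstructed) ** 2))
print(f"per-frame values: {n_faps} -> {k}, reconstruction RMSE = {rmse:.4f}")
```

In practice the transmitted coefficients would also be quantized and entropy-coded, and the same pipeline could be combined with the transform-coding and interpolation methods described in the paper.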