We show that traditional waveform coding and 3-D model-based coding are not competing alternatives but should be combined to support and complement each other. Both approaches are combined such that the generality of waveform coding and the efficiency of 3-D model-based coding are available where needed. The combination is achieved by providing the block-based video coder with a second reference frame for prediction, which is synthesized by the model-based coder. The model-based coder uses a parameterized 3-D head model specifying the shape and color of a person. We therefore restrict our investigations to typical videotelephony scenarios that show head-and-shoulder scenes. Motion and deformation of the 3-D head model constitute facial expressions, which are represented by facial animation parameters (FAPs) based on the MPEG-4 standard. An intensity-gradient-based approach that exploits the 3-D model information is used to estimate the FAPs, as well as illumination parameters that describe changes of brightness in the scene. Model failures and objects that are unknown at the decoder are handled by standard block-based motion-compensated prediction, which is not restricted to a particular scene content but results in lower coding efficiency. A Lagrangian approach is employed to determine the most efficient prediction for each block, from either the synthesized model frame or the previous decoded frame. Experiments on five video sequences show that bit-rate savings of about 35% are achieved at equal average peak signal-to-noise ratio (PSNR) when comparing the model-aided codec to TMN-10, the state-of-the-art test model of the H.263 standard. This corresponds to a gain of 2-3 dB in PSNR when encoding at the same average bit rate.
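The per-block reference selection can be illustrated with a minimal sketch of the Lagrangian decision J = D + lambda * R. The helper names (`sad`, `choose_prediction`), the SAD distortion measure, and the fixed rate estimates per mode are assumptions for illustration; they are not the TMN-10 implementation, which couples the Lagrange multiplier to the quantizer and performs full rate-distortion-optimized mode selection.

```python
import numpy as np

def sad(block: np.ndarray, prediction: np.ndarray) -> float:
    """Sum of absolute differences between an original block and its prediction."""
    return float(np.abs(block.astype(np.int32) - prediction.astype(np.int32)).sum())

def choose_prediction(block: np.ndarray,
                      model_pred: np.ndarray, model_rate: float,
                      inter_pred: np.ndarray, inter_rate: float,
                      lam: float) -> str:
    """Per-block Lagrangian decision between prediction from the synthesized
    model frame and from the previous decoded frame.

    block       : original pixels of the current block
    model_pred  : prediction taken from the frame synthesized by the 3-D model coder
    inter_pred  : motion-compensated prediction from the previous decoded frame
    *_rate      : estimated bits needed to signal each mode (illustrative inputs)
    lam         : Lagrange multiplier trading distortion against rate
    """
    j_model = sad(block, model_pred) + lam * model_rate
    j_inter = sad(block, inter_pred) + lam * inter_rate
    return "model" if j_model <= j_inter else "inter"
```

In a full encoder this decision would be applied to every macroblock and the chosen reference signaled in the bitstream; a larger lambda biases the choice toward the mode that costs fewer bits, a smaller lambda toward the mode with lower prediction error.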