Robust and Rapid Generation of Animated Faces from Video Images: A Model-Based Modeling Approach
International Journal of Computer Vision - Special Issue on Research at Microsoft Corporation
We show that we can effectively fit complex animation models to noisy image data. Our approach is based on robust least-squares adjustment and takes advantage of three complementary sources of information: stereo data, silhouette edges, and 2-D feature points. We take stereo to be our main information source and use the other two whenever available. In this way, complete head models, including ears and hair, can be acquired with a cheap and entirely passive sensor, such as an ordinary video camera. The motion parameters of limbs can be captured similarly and then fed to existing animation software to produce synthetic sequences.
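The abstract's core numerical tool is robust least-squares adjustment, which keeps noisy or spurious measurements (e.g. bad stereo matches) from dominating the fit. A minimal sketch of one standard realization, iteratively reweighted least squares with Huber weights, is shown below; the function names and the choice of Huber weighting with an MAD scale estimate are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def huber_weights(r, k=1.345):
    # Huber M-estimator weights: 1 inside the band |r| <= k,
    # k/|r| outside, so large residuals are progressively downweighted.
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > k
    w[mask] = k / a[mask]
    return w

def robust_lsq(A, b, iters=20, k=1.345):
    # Iteratively reweighted least squares (IRLS): start from the
    # ordinary least-squares solution, then repeatedly re-solve with
    # weights that shrink the influence of outlying residuals.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    for _ in range(iters):
        r = A @ x - b
        # Robust scale estimate via the median absolute deviation (MAD).
        s = 1.4826 * np.median(np.abs(r)) + 1e-12
        sw = np.sqrt(huber_weights(r / s, k))
        x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x
```

In the paper's setting the residuals would come from several sources at once (stereo, silhouettes, feature points), each contributing its own rows of `A` and `b` with a per-source weight; the reweighting loop is the same.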