EmoHeart: conveying emotions in second life based on affect sensing from text
Advances in Human-Computer Interaction - Special issue on emotion-aware natural interaction
In this paper, our objective is to facilitate the way in which emotion is conveyed through avatars in virtual environments. The established approach requires the end-user to manually select his/her emotional state through a text-based interface (using emoticons and/or keywords), which is then mapped onto pre-defined emotional states of the avatar. In contrast to this rather trivial solution, we envisage a system that automatically extracts emotion-related metadata from a video stream, most often originating from a webcam. Unlike transmitting the entire video stream, which preserves the most information but is often prohibitive in terms of bandwidth usage, this metadata extraction process enables the system to be deployed in large-scale environments, as the bandwidth required for the communication channel is drastically reduced.
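As a rough illustration of the bandwidth argument above, the following sketch compares the size of a hypothetical emotion-metadata message against a single uncompressed webcam frame. The message fields (`avatar_id`, `emotion`, `intensity`, `timestamp_ms`) are assumptions for illustration only, not the format used by the described system.

```python
import json

# Hypothetical emotion-metadata message: instead of streaming raw webcam
# frames, the client sends only a small description of the detected
# emotional state. Field names here are illustrative assumptions.
metadata = {
    "avatar_id": "user-42",        # hypothetical avatar identifier
    "emotion": "joy",              # label produced by an affect classifier
    "intensity": 0.8,              # normalized intensity in [0, 1]
    "timestamp_ms": 1700000000000,
}
packet = json.dumps(metadata).encode("utf-8")

# One uncompressed 640x480 RGB webcam frame, for comparison.
raw_frame_bytes = 640 * 480 * 3  # 921,600 bytes per frame

print(len(packet))       # on the order of 100 bytes
print(raw_frame_bytes)   # ~900 KB for a single raw frame
```

Even against a compressed video stream, a per-update message of this size is several orders of magnitude cheaper, which is what makes large-scale deployment feasible.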