Reflecting user faces in avatars
IVA'10: Proceedings of the 10th International Conference on Intelligent Virtual Agents
In this paper we describe how emotion-related data can be exchanged efficiently between participants in a large-scale networked virtual environment. This type of metadata is extracted from video streams captured in real time with off-the-shelf webcams and applied to a two-dimensional (2D) stylised avatar, thereby improving the immersion the user experiences while navigating and communicating in the virtual world. As emotion-related data, once processed through the system, can be considered a specific type of state information, a generic networked virtual environment architecture can be used to distribute the information between participants. We have opted to extend the in-house developed Architecture for Large-scale Virtual Interactive Communities (ALVIC-NG) to handle these information flows. We will show that the inclusion of this new type of information does not have a detrimental effect on the scalability of the system.
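Because the processed emotion metadata is treated as just another kind of avatar state information, it can be distributed through the same channels as, for example, position or orientation updates. The Python sketch below is only an illustration of that idea under our own assumptions; the message layout and the names EmotionState, pack_state_update and unpack_state_update are hypothetical and are not part of the ALVIC-NG architecture described in the paper.

import json
import struct
import time
from dataclasses import dataclass, asdict

@dataclass
class EmotionState:
    """Emotion-related metadata extracted from a user's webcam stream."""
    avatar_id: int
    emotion: str        # e.g. "happy", "surprised", "neutral"
    intensity: float    # normalised strength/confidence in [0, 1]
    timestamp: float    # capture time, for interpolation on receivers

def pack_state_update(state: EmotionState) -> bytes:
    """Serialise the emotion state as a compact payload so that it can be
    distributed like any other piece of avatar state information."""
    payload = json.dumps(asdict(state)).encode("utf-8")
    # Length-prefixed framing, as a typical state-update message would use.
    return struct.pack("!I", len(payload)) + payload

def unpack_state_update(message: bytes) -> EmotionState:
    """Inverse of pack_state_update, run by every receiving client."""
    (length,) = struct.unpack("!I", message[:4])
    return EmotionState(**json.loads(message[4:4 + length].decode("utf-8")))

if __name__ == "__main__":
    update = EmotionState(avatar_id=42, emotion="happy",
                          intensity=0.8, timestamp=time.time())
    wire = pack_state_update(update)
    # A receiving client would map the decoded state back onto the
    # facial expression of the 2D stylised avatar.
    print(unpack_state_update(wire))

Because such an update is only a few tens of bytes, piggybacking it on the existing state-distribution mechanism is consistent with the paper's claim that adding this information does not harm scalability.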