In many scenes featuring human characters, interacting groups are an important factor in maintaining a sense of realism. However, little is known about what makes these characters appear realistic. In this paper, we investigate human sensitivity to audio mismatches (i.e., when individuals' voices are not matched to their gestures) and visual desynchronization (i.e., when the body motions of the individuals in a group are misaligned in time) in virtual human conversers. Using motion capture data from a range of both polite conversations and arguments, we conduct a series of perceptual experiments and identify factors that contribute to the plausibility of virtual conversing groups. We found that participants are more sensitive to visual desynchronization of body motions than to mismatches between the characters' gestures and their voices. Furthermore, synthetic conversations can appear sufficiently realistic once there is an appropriate balance between talker and listener roles, regardless of body motion desynchronization or mismatched audio.