In this article, we investigate human sensitivity to the coordination and timing of conversational body language for virtual characters. First, we captured the full-body motions (excluding faces and hands) of three actors conversing about a range of topics in either a polite style (i.e., one person talking at a time) or a debate/argument style. Stimuli were then created by applying the motion-captured conversations to virtual characters. In a two-alternative forced choice (2AFC) experiment, participants viewed paired sequences of synchronized and desynchronized conversations and were asked to identify the real one. Detection performance was above chance for both conversation styles, but especially so for the polite conversations, where desynchronization was more noticeable.
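Above-chance detection in a 2AFC task of this kind is typically established with a one-sided exact binomial test against the 50% guessing rate. The sketch below illustrates the computation; the response counts are hypothetical placeholders, not the study's actual data.

```python
# Hypothetical sketch: testing whether 2AFC detection exceeds chance (50%).
# The counts used below are illustrative only, not data from the study.
from math import comb

def binomial_p_above_chance(correct, trials, p=0.5):
    """One-sided exact binomial test: P(X >= correct) under chance guessing."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# e.g. 65 correct responses out of 100 paired trials (made-up numbers)
p_value = binomial_p_above_chance(65, 100)
print(f"p = {p_value:.4f}")  # a small p-value indicates above-chance detection
```

A smaller p-value for one condition than another (e.g. polite vs. debate) would mirror the paper's finding that desynchronization is easier to detect in polite conversations.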