Conversations between two people are ubiquitous in many inhabited contexts. The kinds of conversations that occur depend on several factors, including the time, the location of the participating agents, the spatial relationship between the agents, and the type of conversation in which they are engaged. The statistical distribution of dyadic conversations among a population of agents will therefore depend on these factors. In addition, the conversation types, flow, and duration will depend on agent attributes such as interpersonal relationships, emotional state, personal priorities, and socio-cultural proxemics. We present a framework for distributing conversations among virtual embodied agents in a real-time simulation. To avoid generating actual language dialogues, we express variations in conversational flow using behavior trees that implement a set of conversation archetypes. The flow through these behavior trees depends in part on the agents' attributes and progresses according to parametrically estimated transition probabilities. Drawing on the participating agents' state, a 'smart event' model steers the interchange toward different possible outcomes as it executes. Example behavior trees are developed for two conversation archetypes: buyer–seller negotiations and simple asking–answering; the model can be readily extended to others. Because the conversation archetype is known to the participating agents, they can animate gestures appropriate to their conversational state. The resulting animated conversations demonstrate reasonable variety and variability within the environmental context. Copyright © 2012 John Wiley & Sons, Ltd.
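The abstract's core mechanism — a conversation archetype as a state graph whose transitions are weighted probabilistically and biased by agent attributes — can be illustrated with a minimal sketch. This is not the authors' implementation; all names (`ConversationNode`, the `patience` attribute, the specific states and probabilities of the buyer–seller archetype) are illustrative assumptions.

```python
import random

class ConversationNode:
    """One state in a conversation archetype (a hypothetical structure).

    transitions: list of (next_state_name, base_probability) pairs.
    An empty transition list marks a terminal outcome.
    """
    def __init__(self, name, transitions):
        self.name = name
        self.transitions = transitions

def next_state(node, patience, rng):
    """Sample the next state; an agent attribute (here, patience)
    biases the weight of continuing to haggle versus ending."""
    weights = []
    for target, base_p in node.transitions:
        # Illustrative bias: patient buyers are more likely to keep haggling.
        p = base_p * (patience if target == "haggle" else 1.0)
        weights.append(p)
    r = rng.random() * sum(weights)
    for (target, _), w in zip(node.transitions, weights):
        r -= w
        if r <= 0:
            return target
    return node.transitions[-1][0]

# A tiny buyer-seller archetype; probabilities are made-up placeholders.
NODES = {
    "greet":  ConversationNode("greet",  [("offer", 1.0)]),
    "offer":  ConversationNode("offer",  [("haggle", 0.6), ("accept", 0.2), ("leave", 0.2)]),
    "haggle": ConversationNode("haggle", [("haggle", 0.5), ("accept", 0.3), ("leave", 0.2)]),
    "accept": ConversationNode("accept", []),   # terminal outcome
    "leave":  ConversationNode("leave",  []),   # terminal outcome
}

def run_conversation(patience, seed=0):
    """Walk the archetype from 'greet' to a terminal outcome,
    returning the sequence of states (the conversational flow)."""
    rng = random.Random(seed)
    state, trace = "greet", ["greet"]
    while NODES[state].transitions:
        state = next_state(NODES[state], patience, rng)
        trace.append(state)
    return trace

print(run_conversation(patience=1.2))
```

Because each intermediate state assigns positive probability to a terminal outcome, the walk terminates; varying the attribute (or the seed across agents) yields the kind of variety in flow and duration the abstract describes, without generating any actual dialogue.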