Embodied conversational agents must be able to express themselves convincingly and autonomously. Based on an empirical study of spatial descriptions of landmarks in direction-giving, we present a model that allows virtual agents to automatically generate coordinated language and iconic gestures, i.e., to select their content and derive their form. Our model simulates the interplay between these two modes of expressiveness on two levels. First, two kinds of knowledge representation (propositional and imagistic) capture the modality-specific contents and processes of content planning. Second, dedicated planners are integrated to formulate the concrete verbal and gestural behavior. We present a probabilistic approach to gesture formulation that incorporates multiple contextual factors as well as idiosyncratic patterns in the mapping of visuo-spatial referent properties onto gesture morphology. Results from a prototype implementation are described.
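The probabilistic gesture-formulation step can be illustrated with a minimal sketch. All names here are hypothetical illustrations, not the paper's actual implementation: the conditional distribution `P_HANDSHAPE` stands in for probabilities that, in the described model, would be learned per speaker from the direction-giving corpus (capturing idiosyncratic mappings), and `formulate_gesture` stands in for the formulator that maps a visuo-spatial referent property plus a contextual factor onto one morphology feature.

```python
# Hypothetical conditional probabilities
# P(handshape | referent shape, discourse context).
# In the described model such distributions would be estimated
# empirically per speaker, reflecting idiosyncratic gesture patterns.
P_HANDSHAPE = {
    ("round",   "introducing"): {"C-shape": 0.7, "flat": 0.2, "index": 0.1},
    ("round",   "re-mention"):  {"C-shape": 0.4, "flat": 0.3, "index": 0.3},
    ("angular", "introducing"): {"C-shape": 0.1, "flat": 0.7, "index": 0.2},
    ("angular", "re-mention"):  {"C-shape": 0.1, "flat": 0.4, "index": 0.5},
}

def formulate_gesture(referent_shape: str, context: str) -> str:
    """Select the most probable handshape for a referent in a given context."""
    dist = P_HANDSHAPE[(referent_shape, context)]
    return max(dist, key=dist.get)

print(formulate_gesture("round", "introducing"))   # C-shape
print(formulate_gesture("angular", "re-mention"))  # index
```

The point of the sketch is that the same referent property ("round") can yield different morphology depending on discourse context, which is how the model conditions gesture form on multiple contextual factors rather than on referent shape alone.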