Human conversations are highly dynamic, responsive interactions. To enter into flexible interactions with humans, a conversational agent must be capable of fluent incremental behavior generation. New utterance content must be integrated seamlessly with ongoing behavior, requiring dynamic application of co-articulation. The timing and shape of the agent's behavior must be adapted on the fly to the interlocutor, resulting in natural interpersonal coordination. We present AsapRealizer, a BML 1.0 behavior realizer that achieves these capabilities by building upon, and extending, two existing state-of-the-art realizers, as the result of a collaboration between two research groups.
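To give a sense of the realizer's input, the following is a minimal sketch of a BML 1.0 block (element names and sync-point references follow the BML 1.0 specification; the specific ids and text are illustrative, not taken from the paper). A realizer such as AsapRealizer schedules the speech and gesture so that the gesture stroke aligns with the referenced point in the speech:

```xml
<bml xmlns="http://www.bml-initiative.org/bml/bml-1.0" id="bml1">
  <!-- Speech with a named synchronization point inside the text -->
  <speech id="s1">
    <text>Hello <sync id="tm1"/>there!</text>
  </speech>
  <!-- A beat gesture whose stroke is constrained to the sync point tm1 -->
  <gesture id="g1" lexeme="BEAT" stroke="s1:tm1"/>
</bml>
```

Incremental generation, as described in the abstract, means that blocks like this one can arrive while earlier blocks are still being performed, and the realizer must merge them gracefully (e.g., co-articulating from an ongoing gesture into g1) rather than executing each block in isolation.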