Virtual conversational agents are expected to combine speech with nonverbal modalities to produce intelligible and believable utterances. However, the automatic synthesis of coverbal gestures still struggles with several problems, such as naturalness in procedurally generated animations, flexibility in pre-defined movements, and synchronization with speech. In this paper, we focus on generating complex multimodal utterances, including gesture and speech, from XML-based descriptions of their overt form. We describe a coordination model that reproduces co-articulation and transition effects in both modalities. In particular, an efficient kinematic approach to creating gesture animations from shape specifications is presented, which provides fine adaptation to the temporal constraints imposed by cross-modal synchrony.
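The abstract mentions adapting gesture animation to temporal constraints imposed by cross-modal synchrony. The following Python sketch illustrates one common way such a constraint can be met: rescaling the keyframes of a gesture's preparation, stroke, and retraction phases so that the stroke co-occurs with its speech affiliate. The class names, phase structure, and timings here are illustrative assumptions, not the paper's actual coordination model.

```python
# Hypothetical sketch: retiming a gesture stroke so that it co-occurs with
# its speech affiliate. All names and numbers are illustrative and do not
# reproduce the paper's method.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Keyframe:
    t: float                       # time (s) relative to gesture onset
    wrist: Tuple[float, float, float]  # placeholder wrist position (x, y, z)


@dataclass
class GesturePhase:
    name: str                      # "preparation", "stroke", or "retraction"
    keyframes: List[Keyframe]


def retime_phase(phase: GesturePhase, start: float, end: float) -> GesturePhase:
    """Linearly rescale a phase's keyframes into the interval [start, end]."""
    t0, t1 = phase.keyframes[0].t, phase.keyframes[-1].t
    span = (t1 - t0) or 1e-6
    scaled = [
        Keyframe(start + (kf.t - t0) / span * (end - start), kf.wrist)
        for kf in phase.keyframes
    ]
    return GesturePhase(phase.name, scaled)


def synchronize(gesture: List[GesturePhase],
                affiliate_onset: float,
                affiliate_end: float) -> List[GesturePhase]:
    """Place the stroke over the affiliate word, fitting the preparation
    before it and the retraction after it. A full coordination model would
    also adapt movement velocity and handle co-articulation between
    successive gestures."""
    prep, stroke, retract = gesture
    prep_dur = prep.keyframes[-1].t - prep.keyframes[0].t
    retr_dur = retract.keyframes[-1].t - retract.keyframes[0].t
    return [
        retime_phase(prep, affiliate_onset - prep_dur, affiliate_onset),
        retime_phase(stroke, affiliate_onset, affiliate_end),
        retime_phase(retract, affiliate_end, affiliate_end + retr_dur),
    ]


if __name__ == "__main__":
    gesture = [
        GesturePhase("preparation", [Keyframe(0.0, (0.0, 0.0, 0.0)),
                                     Keyframe(0.4, (0.2, 0.3, 0.1))]),
        GesturePhase("stroke",      [Keyframe(0.4, (0.2, 0.3, 0.1)),
                                     Keyframe(0.8, (0.5, 0.4, 0.2))]),
        GesturePhase("retraction",  [Keyframe(0.8, (0.5, 0.4, 0.2)),
                                     Keyframe(1.3, (0.0, 0.0, 0.0))]),
    ]
    # Suppose (hypothetically) word timings from a TTS engine place the
    # affiliate word at 1.2-1.6 s; retime the gesture phases accordingly.
    for phase in synchronize(gesture, 1.2, 1.6):
        print(phase.name, [round(kf.t, 2) for kf in phase.keyframes])
```

In practice, such retiming would be driven by word timings obtained from the speech synthesizer, so that gesture phase boundaries can be adjusted once the affiliate's onset and end are known.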