This paper presents an expressive gesture model that generates communicative gestures accompanying speech for the humanoid robot Nao. The work focuses on the expressivity of robot gestures and their coordination with speech. To this end, we have extended our existing virtual agent platform GRETA and adapted it to the robot. Gestural prototypes are described symbolically and stored in a gestural database called a lexicon. Given a set of intentions and emotional states to communicate, the system selects the corresponding gestures from the robot lexicon. The selected gestures are then planned so as to synchronize with speech, and finally instantiated as robot joint values while taking into account parameters of gestural expressivity such as temporal extension, spatial extension, fluidity, power, and repetitivity. This paper provides a detailed overview of the proposed model.
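The pipeline described above (symbolic lexicon lookup, then instantiation into joint-space values modulated by expressivity parameters) can be sketched minimally as follows. This is an illustrative sketch, not the authors' implementation: the `Gesture` class, the `LEXICON` entries, and the way `spatial_ext` and `temporal_ext` scale amplitude and duration are all simplifying assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    name: str
    keyframes: list   # symbolic per-phase amplitudes (hypothetical encoding)
    duration: float   # nominal gesture duration in seconds

# Hypothetical lexicon: maps a communicative intention to a gesture prototype.
LEXICON = {
    "greeting": Gesture("wave", keyframes=[0.2, 0.8, 0.2], duration=1.5),
}

def instantiate(gesture, spatial_ext=1.0, temporal_ext=1.0):
    """Turn a symbolic gesture into concrete motion parameters.

    Assumed convention: spatial extension scales keyframe amplitudes;
    temporal extension acts as a speed factor (larger -> shorter duration).
    """
    amplitudes = [k * spatial_ext for k in gesture.keyframes]
    duration = gesture.duration / temporal_ext
    return amplitudes, duration

# Example: a wide, fast greeting gesture.
amps, dur = instantiate(LEXICON["greeting"], spatial_ext=2.0, temporal_ext=1.5)
```

In a real system the amplitudes would be mapped onto Nao's joint angles within its mechanical limits, and planning would align the gesture stroke with the stressed syllable of the accompanying speech; both steps are abstracted away here.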