Personalized expressive embodied conversational agent EVA

  • Authors:
  • Izidor Mlakar; Matej Rojc

  • Affiliations:
  • Roboti c.s. d.o.o., University of Maribor, Slovenia; Faculty of Electrical Engineering and Computer Science, University of Maribor, Slovenia

  • Venue:
  • VIS '10: Proceedings of the 3rd WSEAS International Conference on Visualization, Imaging and Simulation
  • Year:
  • 2010

Abstract

In this paper, a new modular framework (the EVA framework) and the expressive embodied conversational agent EVA are presented. From talking heads to fully animatable bodies, and through techniques such as behavioral and emotion modeling, researchers are trying to build interaction interfaces that behave as naturally as possible. The ECA EVA presented in this article is a mesh-based multi-part model supporting both bone-based and morph-target-based animation. EVA can speak and perform gestures such as facial expressions, gaze shifts, and hand gestures. Each gesture is described as a composition of movements of one or more base elements (bones and/or morphs) and can further be fine-tuned using time (speed) and space (stress) attributes. Each gesture can therefore be unique, and is either predefined ("offline" modeling) or specified by an XML-based description ("online" modeling). The suggested multi-part model concept enables EVA to perform several gestures simultaneously and independently of each other (e.g., emotion blending, expressive speech). Additionally, the multi-part concept allows each of the model parts to be updated even while EVA is being animated. The embodied conversational agent EVA thus provides personalization of both its behavior (at the gesture level) and its appearance. EVA's animation engine is based on Panda3D and fully supports the Python programming language.
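The "online modeling" path described above relies on an XML-based gesture description. The Python snippet below is a minimal sketch of how such a description might be consumed; the element and attribute names used here (gesture, movement, element, speed, stress) are assumptions made for illustration, since the paper's actual schema is not reproduced in this abstract.

    # Hypothetical gesture XML: the tag and attribute names are assumed
    # for illustration and are not the paper's actual description format.
    import xml.etree.ElementTree as ET

    GESTURE_XML = """
    <gesture name="greeting_wave">
      <movement element="bone:right_forearm" speed="1.2" stress="0.8"/>
      <movement element="morph:smile"        speed="1.0" stress="0.5"/>
    </gesture>
    """

    def parse_gesture(xml_text):
        """Return (gesture name, list of (element, speed, stress) tuples)."""
        root = ET.fromstring(xml_text)
        movements = []
        for m in root.findall("movement"):
            movements.append((
                m.get("element"),               # bone or morph target to drive
                float(m.get("speed", "1.0")),   # time attribute: playback rate
                float(m.get("stress", "1.0")),  # space attribute: amplitude
            ))
        return root.get("name"), movements

    name, movements = parse_gesture(GESTURE_XML)
    print(name, movements)

Keeping speed and stress as per-movement attributes mirrors the abstract's point that each gesture instance can be fine-tuned, and therefore unique, without remodeling the underlying base elements.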
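The multi-part model and the Panda3D-based engine suggest how several gestures can run simultaneously on independent parts. The sketch below uses Panda3D's standard multi-part Actor loading and animation blending to illustrate the idea; the model and animation file paths, part names, and gesture names are hypothetical, and this is not EVA's actual code.

    from direct.showbase.ShowBase import ShowBase
    from direct.actor.Actor import Actor

    class EvaDemo(ShowBase):
        def __init__(self):
            ShowBase.__init__(self)
            # Multi-part actor: each body part is its own model with its own
            # animations, so parts can be animated (or swapped) independently.
            # All file paths and names below are placeholders.
            self.eva = Actor(
                {"head": "models/eva_head", "body": "models/eva_body"},
                {"head": {"smile": "anims/smile", "gaze": "anims/gaze"},
                 "body": {"wave": "anims/wave"}},
            )
            self.eva.reparentTo(self.render)
            # Blend two facial gestures on the head part with weighted
            # control effects, while the body performs a hand gesture.
            self.eva.enableBlend(partName="head")
            self.eva.setControlEffect("smile", 0.6, partName="head")
            self.eva.setControlEffect("gaze", 0.4, partName="head")
            self.eva.loop("smile", partName="head")
            self.eva.loop("gaze", partName="head")
            self.eva.loop("wave", partName="body")

    EvaDemo().run()

Because each part owns its own animation channels, a gesture playing on the body never interferes with facial gestures on the head, which is one way the abstract's claims about simultaneous gestures and runtime part updates can be realized.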