Automated Generation of Emotive Virtual Humans

  • Authors:
  • Joon Hao Chuah, Brent Rossen, Benjamin Lok

  • Affiliations:
  • CISE, University of Florida, Gainesville, FL 32611, USA

  • Venue:
  • IVA '09 Proceedings of the 9th International Conference on Intelligent Virtual Agents
  • Year:
  • 2009


Abstract

Emotive virtual humans (VHs) are important for affective interactions with embodied conversational agents [1]. However, creating emotive VHs requires significant resources and time; for example, the VHs in movies and video games require teams of animators and months of work. VHs can also be imbued with emotion using appraisal-theory methods, which apply psychology-based models that evaluate external events against the VH's goals and beliefs to generate emotions. These external events require manual tagging or natural language understanding [2]. As an alternative, we propose tagging VH responses with emotions using textual affect sensing. The method developed by Neviarouskaya et al. [3] uses syntactic parses and a database of words with associated emotion intensities. We use this database, and because the emotions are associated with specific words, we can combine them with audio timing information to generate lip-synched facial expressions. Our approach, AutoEmotion, automatically adds basic emotions to VHs without manual animation, manual tagging, or natural language understanding.
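The core idea in the abstract can be illustrated with a minimal sketch: look up each word of a VH response in a word-emotion database and pair any hit with that word's audio timing to produce timed facial-expression keyframes. The database entries, intensities, and timing values below are invented placeholders for illustration, not the authors' actual data or pipeline.

```python
# Toy word -> (emotion, intensity) database. Real systems, such as the
# one by Neviarouskaya et al. cited in the abstract, use a much larger
# lexicon; these entries and values are hypothetical.
EMOTION_DB = {
    "happy": ("joy", 0.8),
    "terrible": ("sadness", 0.7),
    "afraid": ("fear", 0.9),
}

def tag_response(words_with_times):
    """Tag a VH response with timed emotion keyframes.

    words_with_times: list of (word, start_seconds) pairs, assumed to
    come from an audio alignment step (e.g. lip-sync timing data).
    Returns keyframes for any word found in the emotion database.
    """
    keyframes = []
    for word, start in words_with_times:
        entry = EMOTION_DB.get(word.lower().strip(".,!?"))
        if entry:
            emotion, intensity = entry
            keyframes.append(
                {"time": start, "emotion": emotion, "intensity": intensity}
            )
    return keyframes

# Example: a short response with per-word start times (seconds).
response = [("I", 0.0), ("feel", 0.2), ("happy,", 0.5), ("today", 0.9)]
print(tag_response(response))
# -> [{'time': 0.5, 'emotion': 'joy', 'intensity': 0.8}]
```

Because each keyframe carries a timestamp, a downstream animation system could drive facial blend shapes at the moment the emotive word is spoken, which is the synchronization the abstract describes.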