Multimodal authoring tool for populating a database of emotional reactive animations

  • Authors:
  • Alejandra García-Rojas, Mario Gutiérrez, Daniel Thalmann, Frédéric Vexo

  • Affiliations:
  • Virtual Reality Laboratory (VRlab), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland (all authors)

  • Venue:
  • MLMI'05: Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction
  • Year:
  • 2005

Abstract

We aim to create a model of emotional reactive virtual humans. This model will help define realistic behavior for virtual characters based on emotions and on events in the Virtual Environment to which they react. A large set of pre-recorded animations is used to obtain such a model. We have defined a knowledge-based system that stores animations of reflex movements, taking personality and emotional state into account. Populating such a database is a complex task. In this paper we describe a multimodal authoring tool that provides a solution to this problem. Our multimodal tool makes use of motion capture equipment, a handheld device and a large projection screen.
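The abstract describes a knowledge-based system that indexes pre-recorded reflex animations by the triggering event, the character's personality, and its emotional state, but does not publish a concrete schema. The following is a minimal sketch of what such an index could look like; all names here (EmotionalState, AnimationEntry, AnimationDB) and the coarse trait/intensity labels are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class EmotionalState:
    """Discretized emotional state used as part of the lookup key (assumed encoding)."""
    emotion: str        # e.g. "fear", "joy", "surprise"
    intensity: int      # coarse bucket, e.g. 0..4

@dataclass
class AnimationEntry:
    """One pre-recorded reflex animation, e.g. captured with motion capture."""
    clip_id: str            # reference to the stored motion clip
    event: str              # stimulus it reacts to, e.g. "loud_noise"
    personality: str        # coarse trait label, e.g. "extravert"
    state: EmotionalState

class AnimationDB:
    """Lookup of reflex clips keyed by (event, personality, emotional state)."""

    def __init__(self) -> None:
        self._index: Dict[Tuple[str, str, EmotionalState], List[AnimationEntry]] = {}

    def add(self, entry: AnimationEntry) -> None:
        # The authoring tool would call this as each captured clip is annotated.
        key = (entry.event, entry.personality, entry.state)
        self._index.setdefault(key, []).append(entry)

    def query(self, event: str, personality: str,
              state: EmotionalState) -> List[AnimationEntry]:
        # At runtime, a virtual human retrieves candidate reactions for an event.
        return self._index.get((event, personality, state), [])

# Usage sketch: populate during authoring, query when an event occurs.
db = AnimationDB()
db.add(AnimationEntry("clip_042", "loud_noise", "extravert",
                      EmotionalState("fear", intensity=3)))
print(db.query("loud_noise", "extravert", EmotionalState("fear", 3)))
```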