Modeling embodied feedback with virtual humans

  • Authors:
  • Stefan Kopp; Jens Allwood; Karl Grammer; Elisabeth Ahlsén; Thorsten Stocksmeier

  • Affiliations:
  • A.I. Group, Bielefeld University, Bielefeld, Germany; Dept. of Linguistics, Göteborg University, Göteborg, Sweden; Ludwig Boltzmann Institute for Urban Ethology, Vienna, Austria; Dept. of Linguistics, Göteborg University, Göteborg, Sweden; A.I. Group, Bielefeld University, Bielefeld, Germany

  • Venue:
  • ZiF'06: Proceedings of Embodied Communication in Humans and Machines, the 2nd ZiF Research Group International Conference on Modeling Communication with Robots and Virtual Humans
  • Year:
  • 2006


Abstract

In natural communication, both speakers and listeners are active most of the time. While a speaker contributes new information, a listener gives feedback by producing unobtrusive (usually short) vocal or non-vocal bodily expressions that indicate whether he/she is able and willing to communicate, perceive, and understand the information, and what emotions and attitudes this information triggers. Simulating feedback behavior in artificial conversational agents poses major challenges, such as the concurrent and integrated perception and production of multi-modal and multifunctional expressions. We present an approach to modeling feedback for and with virtual humans, grounded in a framework that studies "embodied feedback" as a special case of a more general theoretical account of embodied communication. We describe a realization of this approach with the virtual human Max and present results.
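
To make the notion of multifunctional feedback concrete, the sketch below is a minimal, hypothetical illustration (not the authors' architecture for Max) of how a listener agent might map its appraisal of an incoming utterance onto short vocal and non-vocal cues along the contact, perception, and understanding dimensions mentioned in the abstract. All names, cue labels, and thresholds are illustrative assumptions.

```python
# Illustrative sketch only -- not the paper's implementation.
# A listener state is appraised on the feedback dimensions the abstract
# describes (ability/willingness to communicate, perceive, understand,
# plus triggered attitudes) and mapped to a short feedback cue.

from dataclasses import dataclass

@dataclass
class ListenerState:
    contact: float        # willingness/ability to continue the exchange (0..1)
    perception: float     # how well the signal was heard (0..1)
    understanding: float  # how well the content was interpreted (0..1)
    attitude: str         # e.g. "agreement", "surprise", "neutral"

def choose_feedback(state: ListenerState) -> tuple[str, str]:
    """Return a (vocal, non-vocal) feedback pair for the current state."""
    if state.contact < 0.3:
        return ("", "gaze_away")          # signal low willingness to continue
    if state.perception < 0.5:
        return ("huh?", "lean_forward")   # signal a perception problem
    if state.understanding < 0.5:
        return ("hm?", "raise_eyebrows")  # signal a comprehension problem
    if state.attitude == "agreement":
        return ("yeah", "head_nod")       # positive, evaluative feedback
    return ("mhm", "head_nod")            # default backchannel continuer

# Example: the listener heard the speaker clearly but did not understand.
print(choose_feedback(ListenerState(0.9, 0.8, 0.3, "neutral")))
# -> ('hm?', 'raise_eyebrows')
```

A full model would of course produce such cues concurrently with ongoing perception rather than in a turn-by-turn loop, which is exactly the integration challenge the abstract highlights.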