An adaptive personality model for ECAs

  • Authors:
  • He Xiao; Donald Reid; Andrew Marriott; E. K. Gulland

  • Affiliations:
  • Curtin University of Technology, Bentley, Western Australia (all authors)

  • Venue:
  • ACII '05: Proceedings of the First International Conference on Affective Computing and Intelligent Interaction
  • Year:
  • 2005

Abstract

Curtin University's Talking Heads (TH) combine an MPEG-4 compliant Facial Animation Engine (FAE), a Text To Emotional Speech Synthesiser (TTES), and a multi-modal Dialogue Manager (DM) that accesses a Knowledge Base (KB) and outputs Virtual Human Markup Language (VHML) text, which drives the TTES and FAE. A user enters a question and an animated TH responds with a believable, affective voice and actions. However, this response is normally marked up in VHML by the KB developer to produce the required facial gestures and emotional display. A real person does not react by fixed rules, but according to personality, beliefs, good and bad previous experiences, and training. This paper reviews personality theories and models relevant to THs, discusses the research at Curtin over the last five years in implementing and evaluating personality models, and finally proposes an active, adaptive personality model to unify that work.
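
To illustrate the pipeline the abstract describes, the sketch below shows VHML-style markup of the kind a DM might emit for the TTES and FAE to render. It is a minimal, hypothetical example: the element and attribute names (vhml, person, disposition, happy, sad, intensity, smile) are assumptions modelled on the publicly documented VHML specification, not output reproduced from this paper.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative VHML-style response: text annotated with emotion
     and gesture elements so the speech synthesiser and facial
     animation engine can display the corresponding affect.
     Element/attribute names are assumptions based on the VHML
     specification, not taken from the paper itself. -->
<vhml>
  <person disposition="friendly">
    <p>
      <happy intensity="60">
        Hello! I'm glad you asked about that.
      </happy>
      <smile/>
      <sad intensity="30">
        Unfortunately, the library closes early today.
      </sad>
    </p>
  </person>
</vhml>
```

In a hand-authored KB, markup like this is fixed by the developer at authoring time; the adaptive personality model the paper proposes would instead select and weight such emotion and gesture annotations at run time.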