Curtin University's Talking Heads (TH) combine an MPEG-4 compliant Facial Animation Engine (FAE), a Text To Emotional Speech Synthesiser (TTES), and a multi-modal Dialogue Manager (DM) that accesses a Knowledge Base (KB) and outputs Virtual Human Markup Language (VHML) text, which drives the TTES and FAE. A user enters a question and an animated TH responds with a believable and affective voice and actions. However, this response is normally marked up in VHML by the KB developer to produce the required facial gestures and emotional display. A real person reacts not by fixed rules but according to personality, beliefs, good and bad previous experiences, and training. This paper reviews personality theories and models relevant to THs, then discusses the research at Curtin over the last five years in implementing and evaluating personality models. Finally, the paper proposes an active, adaptive personality model to unify that work.
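The KB-to-VHML pipeline described above can be sketched roughly as follows. This is an illustrative assumption, not Curtin's actual implementation: the class names (`KnowledgeBase`, `DialogueManager`, the stub TTES/FAE), the lookup scheme, and the KB contents are all hypothetical, and the emotion tags merely follow the general style of VHML markup.

```python
# Hypothetical sketch of the Talking Head pipeline: the DM looks up a
# pre-authored, VHML-marked-up answer in the KB and hands the same
# VHML text to both the speech synthesiser and the animation engine.
# All names and KB contents are illustrative assumptions.

class KnowledgeBase:
    """Maps user questions to answers hand-marked-up in VHML-style tags
    by the KB developer (the fixed-rule approach the paper critiques)."""

    def __init__(self):
        self._answers = {
            "how are you": '<happy intensity="60">I am very well, thank you!</happy>',
        }

    def lookup(self, question):
        key = question.strip().lower().rstrip("?")
        return self._answers.get(
            key, "<sad>I do not know the answer to that.</sad>")


class StubTTES:
    """Stand-in for the Text To Emotional Speech Synthesiser."""
    def __init__(self):
        self.spoken = []

    def speak(self, vhml_text):
        self.spoken.append(vhml_text)  # a real TTES would render audio


class StubFAE:
    """Stand-in for the MPEG-4 Facial Animation Engine."""
    def __init__(self):
        self.animated = []

    def animate(self, vhml_text):
        self.animated.append(vhml_text)  # a real FAE would drive FAPs


class DialogueManager:
    """Routes a user question through the KB and drives TTES and FAE
    with the resulting VHML text."""

    def __init__(self, kb, ttes, fae):
        self.kb, self.ttes, self.fae = kb, ttes, fae

    def respond(self, question):
        vhml = self.kb.lookup(question)
        self.ttes.speak(vhml)
        self.fae.animate(vhml)
        return vhml
```

A personality model of the kind the paper proposes would sit inside `respond`, choosing or adapting the emotional markup instead of replaying the developer's fixed annotations.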