Multispace behavioral model for face-based affective social agents

  • Authors:
  • Ali Arya; Steve DiPaola

  • Affiliations:
  • Carleton School of Information Technology, Carleton University, Ottawa, ON, Canada; School of Interactive Arts & Technology, Simon Fraser University, Surrey, BC, Canada

  • Venue:
  • Journal on Image and Video Processing
  • Year:
  • 2007


Abstract

This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial-feature level. The personality and mood spaces draw on findings in behavioral psychology to relate the perception of personality types and emotional states to facial actions and expressions, using two-dimensional models of personality and emotion. The knowledge space encapsulates the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry space provides an MPEG-4-compatible set of parameters for low-level control, the behavioral extensions available through the three higher-level spaces provide a flexible means of designing complex personality types, facial expressions, and dynamic interactive scenarios.
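The layered architecture described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual parameterization: the axis labels, weights, and parameter names (`lip_corner_raise`, `brow_raise`, `head_tilt`) are all assumptions chosen to show how points in 2-D personality and mood spaces could bias low-level, MPEG-4-style facial parameters.

```python
from dataclasses import dataclass

@dataclass
class Point2D:
    """A point in a 2-D behavioral space (axis labels are assumed, not the paper's)."""
    x: float  # e.g. valence or dominance
    y: float  # e.g. arousal or affiliation

def blend_geometry(personality: Point2D, mood: Point2D,
                   base: dict) -> dict:
    """Combine higher-level spaces into feature-level geometry parameters.

    Illustrative only: the paper grounds its mappings in behavioral-psychology
    findings, whereas the weights here are ad hoc placeholders.
    """
    params = dict(base)
    # Mood (transient state) nudges smile- and brow-related parameters.
    params["lip_corner_raise"] = base.get("lip_corner_raise", 0.0) + 0.5 * mood.x
    params["brow_raise"] = base.get("brow_raise", 0.0) + 0.3 * mood.y
    # Personality (persistent trait) biases the resting pose.
    params["head_tilt"] = base.get("head_tilt", 0.0) + 0.2 * personality.y
    return params

# A cheerful, outgoing agent: both spaces push the expression parameters up.
out = blend_geometry(Point2D(0.8, 0.6), Point2D(0.9, 0.4),
                     {"lip_corner_raise": 0.1})
```

The key design point the sketch mirrors is separation of concerns: mood and personality are edited independently as points in their own spaces, and only the final blend touches the feature-level geometry parameters.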