Emotional states control for on-line game avatars

  • Authors:
  • Ce Zhan; Wanqing Li; Farzad Safaei; Philip Ogunbona

  • Affiliations:
  • University of Wollongong, Wollongong, NSW, Australia (all authors)

  • Venue:
  • Proceedings of the 6th ACM SIGCOMM workshop on Network and system support for games
  • Year:
  • 2007

Abstract

Although detailed animation has already been achieved in a number of Multi-player On-line Games (MOGs), players still have to use text commands to control the emotional states of their avatars. Systems have been proposed that perform real-time automatic facial expression recognition of players; such systems can then control an avatar's emotional state by driving the MOG's "animation engine" directly, instead of relying on text commands. One of the challenges for such systems is detecting and recognizing facial components in face images of low spatial resolution. In this paper, a system based on an improved version of the Viola and Jones face detection method is proposed to better serve MOGs. In addition, a robust coarse-to-fine facial landmark localization method is proposed. The proposed system was evaluated on a database different from the training database and achieved an 83% recognition rate for four emotional-state expressions. The system is also able to operate over a wider range of user-to-camera distances.
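
The paper itself does not include code, but the detection stage it builds on can be sketched. Below is a minimal sketch in Python using OpenCV's stock Haar cascade as a stand-in for the authors' improved Viola and Jones detector; the improved detector, the coarse-to-fine landmark localization, and the four-expression classifier are not publicly specified in the abstract and are only stubbed as comments.

```python
# Sketch of the baseline detection stage only: Viola-Jones-style face
# detection on a webcam stream, as a stand-in for the paper's improved
# detector. Landmark localization and expression classification are
# stubbed as comments.
import cv2

# Stock frontal-face Haar cascade shipped with opencv-python
# (assumption: the authors' improved detector is not available).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return bounding boxes (x, y, w, h) of faces in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # normalize lighting before detection
    return cascade.detectMultiScale(
        gray,
        scaleFactor=1.1,   # image-pyramid step between detection scales
        minNeighbors=5,    # overlapping hits required to accept a window
        minSize=(24, 24))  # accept small faces from low-resolution input

cap = cv2.VideoCapture(0)  # player's webcam, as in the MOG use case
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in detect_faces(frame):
        face = frame[y:y + h, x:x + w]
        # A full system would localize facial landmarks on `face`
        # (coarse-to-fine, per the paper) and classify one of the four
        # emotional-state expressions to drive the avatar here.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

As a general property of such detectors, a smaller `minSize` lets the cascade accept the smaller face regions produced when the player sits farther from the camera, at the cost of more false positives; this is the kind of trade-off the low-resolution, variable-distance setting described in the abstract has to manage.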