Facial animation for real-time conversing groups

  • Authors:
  • Rachel McDonnell

  • Affiliations:
  • Trinity College Dublin

  • Venue:
  • Proceedings of the SSPNET 2nd International Symposium on Facial Analysis and Animation

  • Year:
  • 2010

Abstract

In recent years, many developments have been made in real-time crowds, with popular video games such as Ubisoft's Assassin's Creed investing significant resources into their crowds. Research has focused heavily on agent locomotion, but efforts to integrate realistic groups of stationary humans have been rare. However, groups of idle people conversing are important for the realistic depiction of a crowded scene. Using motion-captured data of real conversations to create these groups produces realistic results. However, if motion data or storage is limited, this leads to many duplicated conversations, which can appear unrealistic. In [Ennis et al. 2010], we examined the circumstances under which combining and reusing segments of recorded conversations would appear realistic to the observer. The results of our experiments allowed us to integrate varied groups of conversing characters into our crowd. Until now, our characters were animated with conversational body motion alone (Figure 1). The lack of facial animation and expressions can be disturbing when the viewer focuses on the conversing characters. In this ongoing work, we aim to increase the plausibility of our conversing groups by adding Level Of Detail (LOD) facial animation, while maintaining interactive frame rates.
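
The abstract does not describe how the LOD selection is performed; the following is a minimal sketch of one common approach, choosing a facial-animation detail level from camera distance and viewer focus. The enum names, thresholds, and function `selectFaceLOD` are illustrative assumptions, not the authors' implementation.

```cpp
// Hypothetical distance- and attention-based LOD selection for facial
// animation in a crowd. All levels and thresholds are assumptions made
// for illustration only.
#include <cstdio>

enum class FaceLOD {
    FullBlendShapes,   // full facial rig / blend-shape animation
    ReducedKeyframes,  // coarser, lower-rate facial keyframes
    StaticTexture      // no facial animation, static face texture
};

// Nearer (or viewer-focused) characters receive more detailed facial
// animation, so that interactive frame rates can be preserved.
FaceLOD selectFaceLOD(float distanceToCamera, bool viewerFocus) {
    if (viewerFocus || distanceToCamera < 5.0f)   // metres; assumed cutoffs
        return FaceLOD::FullBlendShapes;
    if (distanceToCamera < 20.0f)
        return FaceLOD::ReducedKeyframes;
    return FaceLOD::StaticTexture;
}

int main() {
    const float distances[] = {2.0f, 12.0f, 50.0f};
    for (float d : distances) {
        FaceLOD lod = selectFaceLOD(d, /*viewerFocus=*/false);
        std::printf("distance %.1f m -> LOD %d\n", d, static_cast<int>(lod));
    }
    return 0;
}
```

In practice, such a scheme would keep the full facial rig only for characters the viewer is likely to attend to, degrading the rest gracefully; the thresholds would be tuned against the frame-rate budget.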