Computational study of human communication dynamics

  • Authors:
  • Louis-Philippe Morency

  • Affiliations:
  • University of Southern California, Los Angeles, CA, USA

  • Venue:
  • J-HGBU '11 Proceedings of the 2011 joint ACM workshop on Human gesture and behavior understanding
  • Year:
  • 2011


Abstract

Face-to-face communication is a highly dynamic process in which participants mutually exchange and interpret linguistic and gestural signals. Even when only one person speaks at a time, the other participants continuously exchange information among themselves and with the speaker through gesture, gaze, posture and facial expressions. To correctly interpret high-level communicative signals, an observer must jointly integrate all spoken words, subtle prosodic changes and simultaneous gestures from all participants. In this paper, we present our ongoing research effort at the USC MultiComp Lab to create models of human communication dynamics that explicitly take into account the multimodal and interpersonal aspects of human face-to-face interactions. The computational framework presented in this paper has wide applicability, including the recognition of human social behaviors, the synthesis of natural animations for robots and virtual humans, improved multimedia content analysis, and the diagnosis of social and behavioral disorders (e.g., autism spectrum disorder).