Modeling dominance in group conversations using nonverbal activity cues

  • Authors:
  • Dinesh Babu Jayagopi, Hayley Hung, Chuohao Yeo, Daniel Gatica-Perez

  • Affiliations:
  • Dinesh Babu Jayagopi: IDIAP Research Institute, Martigny, Switzerland and Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland
  • Hayley Hung: IDIAP Research Institute, Martigny, Switzerland
  • Chuohao Yeo: Department of Computer Science, University of California, Berkeley, CA
  • Daniel Gatica-Perez: IDIAP Research Institute, Martigny, Switzerland and Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland

  • Venue:
  • IEEE Transactions on Audio, Speech, and Language Processing - Special issue on multimodal processing in speech-based interactions
  • Year:
  • 2009

Abstract

Dominance--a behavioral expression of power--is a fundamental mechanism of social interaction, expressed and perceived in conversations through spoken words and audio-visual nonverbal cues. The automatic modeling of dominance patterns from sensor data represents a relevant problem in social computing. In this paper, we present a systematic study on dominance modeling in group meetings from fully automatic nonverbal activity cues, in a multi-camera, multi-microphone setting. We investigate efficient audio and visual activity cues for the characterization of dominant behavior, analyzing single and joint modalities. Unsupervised and supervised approaches for dominance modeling are also investigated. Activity cues and models are objectively evaluated on a set of dominance-related classification tasks, derived from an analysis of the variability of human judgment of perceived dominance in group discussions. Our investigation highlights the power of relatively simple yet efficient approaches and the challenges of audiovisual integration. This constitutes the most detailed study on automatic dominance modeling in meetings to date.
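
To give a rough sense of the kind of "relatively simple yet efficient" unsupervised approach the abstract refers to, the sketch below ranks meeting participants by accumulated speaking time derived from per-person voice activity, one of the simplest audio activity cues of this kind. This is an illustrative assumption, not the authors' exact feature set or pipeline; the function name and segment format are hypothetical.

```python
from collections import defaultdict


def rank_by_speaking_time(speaking_segments):
    """Rank participants by total speaking time.

    speaking_segments: iterable of (participant_id, start_sec, end_sec)
    tuples, e.g. obtained from per-person voice activity detection.
    Returns participant ids sorted from most to least accumulated
    speaking time, a simple unsupervised proxy for perceived dominance.
    """
    totals = defaultdict(float)
    for pid, start, end in speaking_segments:
        totals[pid] += max(0.0, end - start)
    return sorted(totals, key=totals.get, reverse=True)


if __name__ == "__main__":
    # Toy example: participant A holds the floor longest.
    segments = [
        ("A", 0.0, 12.5), ("B", 12.5, 15.0),
        ("A", 15.0, 30.0), ("C", 30.0, 34.0),
        ("B", 34.0, 40.0),
    ]
    print(rank_by_speaking_time(segments))  # ['A', 'B', 'C']
```

The same accumulation scheme could in principle be applied to visual activity measures (e.g., per-person motion energy) or to weighted combinations of cues, which is where the audiovisual integration challenges mentioned above arise.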