Using audio and video features to classify the most dominant person in a group meeting

  • Authors and affiliations:
  • Hayley Hung (IDIAP Research Institute, Martigny, Switzerland)
  • Dinesh Jayagopi (IDIAP, Martigny, Switzerland & Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland)
  • Chuohao Yeo (University of California, Berkeley)
  • Gerald Friedland (International Computer Science Institute (ICSI), Berkeley)
  • Sileye Ba (IDIAP Research Institute, Martigny, Switzerland)
  • Jean-Marc Odobez (IDIAP, Martigny, Switzerland & Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland)
  • Kannan Ramchandran (University of California, Berkeley)
  • Nikki Mirghafori (International Computer Science Institute (ICSI), Berkeley)
  • Daniel Gatica-Perez (IDIAP, Martigny, Switzerland & Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland)

  • Venue:
  • Proceedings of the 15th ACM International Conference on Multimedia
  • Year:
  • 2007


Abstract

The automated extraction of semantically meaningful information from multi-modal data is becoming increasingly necessary as the volume of data captured for archival continues to grow. One area of multi-modal data labelling that has received relatively little attention is the automatic estimation of the most dominant person in a group meeting. In this paper, we present a framework for detecting dominance in group meetings using different audio and video cues, and we show that even a simple model for dominance estimation yields promising results.
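
One way to picture the kind of "simple model" the abstract refers to is an estimator that accumulates an audio cue (e.g. total speaking time) and a video cue (e.g. motion activity) per participant and labels the participant with the highest combined score as most dominant. The sketch below is a minimal illustration under that assumption; the specific features, the weighted-sum combination, and all example values are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a simple accumulation-based dominance estimator.
# The cue names, the weighted-sum fusion, and the example numbers are
# illustrative assumptions, not the authors' actual method.

from typing import Dict


def estimate_most_dominant(
    speaking_time: Dict[str, float],    # seconds of speech per participant (audio cue)
    motion_activity: Dict[str, float],  # accumulated visual activity per participant (video cue)
    audio_weight: float = 0.7,          # assumed relative weight of the audio cue
) -> str:
    """Return the participant with the highest combined dominance score."""
    participants = speaking_time.keys() | motion_activity.keys()

    def normalise(values: Dict[str, float]) -> Dict[str, float]:
        # Normalise each cue so the two modalities are on a comparable scale.
        total = sum(values.values()) or 1.0
        return {p: values.get(p, 0.0) / total for p in participants}

    audio = normalise(speaking_time)
    video = normalise(motion_activity)

    scores = {
        p: audio_weight * audio[p] + (1.0 - audio_weight) * video[p]
        for p in participants
    }
    return max(scores, key=scores.get)


if __name__ == "__main__":
    # Made-up measurements for a four-person meeting.
    speech = {"A": 310.0, "B": 120.0, "C": 95.0, "D": 40.0}
    motion = {"A": 0.42, "B": 0.20, "C": 0.25, "D": 0.13}
    print(estimate_most_dominant(speech, motion))  # -> "A"
```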