Using self-context for multimodal detection of head nods in face-to-face interactions

  • Authors:
  • Laurent Nguyen; Jean-Marc Odobez; Daniel Gatica-Perez

  • Affiliations:
  • Idiap Research Institute, Martigny & École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland (all authors)

  • Venue:
  • Proceedings of the 14th ACM international conference on Multimodal interaction
  • Year:
  • 2012


Abstract

Head nods occur in virtually every face-to-face discussion. As backchannel signals, they are used not only to express a 'yes' but also to display interest or enhance communicative attention. Detecting head nods in natural interactions is a challenging task, as nods can be subtle in both amplitude and duration. In this study, we build on findings in psychology establishing that the dynamics of head gestures are conditioned on the person's speaking status. We develop a multimodal method using audio-based self-context to detect head nods in natural settings, and we demonstrate that this multimodal approach, which exploits the speaking status of the person under analysis, significantly improves the detection rate over a visual-only approach.
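The abstract does not specify how the audio-based self-context is fused with the visual detector. A minimal sketch of the core idea (conditioning a nod detector on the person's speaking status) is shown below; the feature layout, the per-frame framing, and the logistic-regression classifier are all assumptions for illustration, not the authors' actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-frame data: visual head-motion features (e.g., from a
# head tracker) plus a binary speaking-status flag derived from the audio
# channel -- the "self-context" of the person under analysis.
n = 1000
visual = rng.normal(size=(n, 4))          # stand-in visual motion features
speaking = rng.integers(0, 2, size=n)     # 1 = person is currently speaking
labels = rng.integers(0, 2, size=n)       # 1 = frame belongs to a head nod

# Visual-only baseline: one detector, blind to speaking status.
vis_clf = LogisticRegression().fit(visual, labels)

# Multimodal variant: a separate detector per speaking state, so the
# decision boundary can adapt to the different nod dynamics observed
# while speaking versus while listening.
multimodal = {
    s: LogisticRegression().fit(visual[speaking == s], labels[speaking == s])
    for s in (0, 1)
}

def detect_nod(features: np.ndarray, is_speaking: bool) -> float:
    """Score a frame with the detector matching the speaking status."""
    clf = multimodal[int(is_speaking)]
    return clf.predict_proba(features.reshape(1, -1))[0, 1]

print(detect_nod(visual[0], bool(speaking[0])))
```

Splitting the detector by speaking state is only one plausible fusion strategy; appending the speaking flag as an extra input feature to a single classifier would be an equally simple alternative under the same assumptions.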