Human nonverbal behavior multi-sourced ontological annotation

  • Authors:
  • Boris Knyazev

  • Affiliations:
  • Bauman Moscow State Technical University, Moscow, Russia

  • Venue:
  • Proceedings of the International Workshop on Video and Image Ground Truth in Computer Vision Applications
  • Year:
  • 2013


Abstract

In this paper we present the current results of an ongoing three-year research and development project on the automatic annotation of human nonverbal behavior. The project's present output is a tool that provides algorithms and a graphical user interface for generating ground-truth data on a subset of facial and body activities. These data are essential for experts working to unravel the complex linkage between a person's psychophysiological state and their nonverbal behavior. Our work relies on a Kinect sensor, which provides depth maps together with the coordinates of body joints and facial points. Local binary patterns are then extracted from the regions of interest of a facial video; these regions are either spatio-temporally aligned with the depth maps or located using the Active Shape Model. Another key idea of the proposed tool is that the extracted feature vector is semantically associated with ontological concepts, with the prospect of providing annotations for most nonverbal activities.
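The abstract does not give implementation details, but the feature-extraction step it describes — computing a local binary pattern (LBP) descriptor over a facial region of interest — can be sketched in its standard form. The sketch below is a minimal illustration of basic 3x3 LBP, not the authors' code; the function names and the 256-bin histogram choice are assumptions.

```python
import numpy as np

def lbp_code(patch):
    """Compute the 8-bit LBP code for a 3x3 grayscale patch.
    Neighbors are read clockwise from the top-left corner; a bit is
    set when the neighbor is >= the center pixel."""
    center = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2],
                 patch[1, 2], patch[2, 2], patch[2, 1],
                 patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << bit
    return code

def lbp_histogram(roi):
    """Slide a 3x3 window over a region of interest (2-D array) and
    return the normalized 256-bin histogram of LBP codes -- one
    feature vector per region."""
    h, w = roi.shape
    hist = np.zeros(256)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            hist[lbp_code(roi[i - 1:i + 2, j - 1:j + 2])] += 1
    return hist / hist.sum()
```

In a pipeline like the one described, such a histogram would be computed for each facial region (aligned via the Kinect depth data or the Active Shape Model landmarks) and the concatenated histograms would form the feature vector that is then linked to ontological concepts.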