Trajectory signature for action recognition in video

  • Authors:
  • Nicolas Ballas; Bertrand Delezoide; Françoise Prêteux

  • Affiliations:
  • CEA/Mines-ParisTech, Paris, France; CEA, Paris, France; Mines-ParisTech, Paris, France

  • Venue:
  • Proceedings of the 20th ACM international conference on Multimedia
  • Year:
  • 2012

Abstract

Bag-of-Words representations based on local trajectory features, which capture spatio-temporal context through static segmentation grids, are currently the leading paradigm for action annotation. While providing a coarse localization of low-level features, these approaches tend to be limited by the rigidity of the grid. In this work we propose two contributions to trajectory-based signatures. First, we extend a local trajectory feature to characterize acceleration in videos, yielding invariance to constant camera motion. Second, we introduce two new adaptive segmentation grids, the Adaptive Grid (AG) and the Deformable Adaptive Grid (DAG). AG is learnt from video data to fit a given dataset and overcome the rigidity of static grids. DAG is also learnt from video data and can additionally be adapted to a specific video through a deformation operation. Our adaptive grids are then exploited by a Bag-of-Words model at the aggregation step for action recognition. Our proposal is evaluated on four publicly available datasets.
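To make the aggregation step concrete, the following is a minimal sketch of grid-based Bag-of-Words pooling over a static segmentation grid, the baseline that the adaptive grids in the paper improve upon. The function name, the grid layout, and the nearest-word quantization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bow_grid_signature(descriptors, positions, codebook, grid=(2, 2)):
    """Aggregate trajectory descriptors into per-cell Bag-of-Words histograms.

    descriptors: (N, D) array of local trajectory features.
    positions:   (N, 2) array of normalized (x, y) trajectory locations in [0, 1).
    codebook:    (K, D) array of visual words (e.g. obtained by k-means).
    grid:        static segmentation grid layout as (rows, cols) -- illustrative.
    """
    n_rows, n_cols = grid
    k = codebook.shape[0]
    # Quantize each descriptor to its nearest visual word.
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    # Assign each trajectory to a grid cell from its spatial position.
    rows = np.minimum((positions[:, 1] * n_rows).astype(int), n_rows - 1)
    cols = np.minimum((positions[:, 0] * n_cols).astype(int), n_cols - 1)
    cells = rows * n_cols + cols
    # Build one histogram per cell, L1-normalize it, then concatenate all cells.
    signature = np.zeros((n_rows * n_cols, k))
    for cell, word in zip(cells, words):
        signature[cell, word] += 1
    totals = signature.sum(axis=1, keepdims=True)
    np.divide(signature, totals, out=signature, where=totals > 0)
    return signature.ravel()
```

An adaptive grid would replace the fixed `(rows, cols)` cell assignment with boundaries learnt from video data, while the per-cell histogram pooling itself stays the same.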