Activities as time series of human postures

  • Authors:
  • William Brendel; Sinisa Todorovic

  • Affiliations:
  • Oregon State University, Kelley Engineering Center, Corvallis, OR (both authors)

  • Venue:
  • ECCV'10: Proceedings of the 11th European Conference on Computer Vision, Part II
  • Year:
  • 2010

Abstract

This paper presents an exemplar-based approach to detecting and localizing human actions, such as running, cycling, and swinging, in realistic videos with dynamic backgrounds. We show that such activities can be compactly represented as time series of a few snapshots of human-body parts in their most discriminative postures, relative to other activity classes. This enables our approach to efficiently store multiple diverse exemplars per activity class, and to quickly retrieve exemplars that best match the query by aligning their short time-series representations. Given a set of example videos of all activity classes, we extract multiscale regions from all their frames, and then learn a sparse dictionary of the most discriminative regions. The Viterbi algorithm is then used to track detections of the learned codewords across frames of each video, resulting in their compact time-series representations. Dictionary learning is cast within the large-margin framework, wherein we study the effects of ℓ1 and ℓ2 regularization on the sparseness of the resulting dictionaries. Our experiments demonstrate the robustness and scalability of our approach on challenging YouTube videos.