Spatio-Temporal Phrases for Activity Recognition

  • Authors:
  • Yimeng Zhang; Xiaoming Liu; Ming-Ching Chang; Weina Ge; Tsuhan Chen

  • Affiliations:
  • School of Electrical and Computer Engineering, Cornell University; GE Global Research Center, 1 Research Circle, Niskayuna, NY; GE Global Research Center, 1 Research Circle, Niskayuna, NY; GE Global Research Center, 1 Research Circle, Niskayuna, NY; School of Electrical and Computer Engineering, Cornell University

  • Venue:
  • ECCV'12: Proceedings of the 12th European Conference on Computer Vision, Volume Part III
  • Year:
  • 2012

Abstract

Local feature based approaches have become popular for activity recognition. A local feature captures the movement and appearance of a small region in a video and can therefore be ambiguous; for example, when the camera is far from the person, it cannot tell whether a movement comes from the person's hand or foot. To better distinguish different types of activities, prior work has proposed combining local features to encode the relationships among local movements. Due to computational constraints, however, previous work only forms combinations from features that are neighbors in space and/or time. In this paper, we propose an approach that efficiently identifies both local and long-range motion interactions; taking the "push" activity as an example, our approach can capture the combination of one person's hand movement and another person's foot response, even though the corresponding local features are far apart in both space and time. The computational complexity of our approach is linear in the number of local features in a video. Extensive experiments show that, compared with a number of state-of-the-art methods, our approach is effective for recognizing a wide variety of activities, including activities that span a long duration.
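To make the notion of a spatio-temporal phrase concrete, the sketch below pairs quantized local features (visual words) together with a coarsely binned spatial and temporal offset between them, and accumulates the resulting phrases into a histogram. This is only an illustrative assumption-laden sketch, not the authors' algorithm: the feature tuple format, function name, and bin sizes are invented here, and the naive pairwise enumeration shown is quadratic in the number of features, whereas the paper's contribution is a method that identifies such combinations (including long-range ones) in time linear in the number of local features.

```python
# Illustrative sketch (not the paper's algorithm): histogram of
# "spatio-temporal phrases", i.e. pairs of quantized local features
# plus a coarsely binned spatial/temporal offset between them.
# All names, bin sizes, and the feature-tuple format are assumptions.
from collections import Counter
from itertools import combinations

def phrase_histogram(features, space_bin=40, time_bin=15):
    """features: list of (x, y, t, codeword) tuples, one per local feature
    detected in a video (codeword = visual-word index of its descriptor).
    Returns a Counter mapping phrase keys to counts."""
    hist = Counter()
    for f1, f2 in combinations(features, 2):
        # Order the pair canonically so (a, b) and (b, a) yield one phrase.
        if f2[3:] + f2[:3] < f1[3:] + f1[:3]:
            f1, f2 = f2, f1
        (x1, y1, t1, w1), (x2, y2, t2, w2) = f1, f2
        # Quantize the relative displacement; long-range offsets are kept,
        # just represented more coarsely by the binning.
        dx = round((x2 - x1) / space_bin)
        dy = round((y2 - y1) / space_bin)
        dt = round((t2 - t1) / time_bin)
        hist[(w1, w2, dx, dy, dt)] += 1
    return hist

# Example: two far-apart features (e.g. one person's hand, another person's
# foot) still form a phrase, with the offset recording their long-range relation.
feats = [(10, 20, 3, 7), (300, 180, 48, 12), (15, 25, 5, 7)]
print(phrase_histogram(feats).most_common(3))
```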