Velocity Adaptation of Space-Time Interest Points

  • Authors:
  • Ivan Laptev; Tony Lindeberg

  • Affiliations:
  • Computational Vision and Active Perception Laboratory (CVAP), Sweden (both authors)

  • Venue:
  • ICPR '04: Proceedings of the 17th International Conference on Pattern Recognition, Volume 1
  • Year:
  • 2004


Abstract

The notion of local features in space-time has recently been proposed to capture and describe local events in video. When computing space-time descriptors, however, the result may strongly depend on the relative motion between the object and the camera. To compensate for this variation, we present a method that automatically adapts the features to the local velocity of the image pattern and, hence, results in a video representation that is stable with respect to different amounts of camera motion. Experimentally we show that the use of velocity adaptation substantially increases the repeatability of interest points as well as the stability of their associated descriptors. Moreover, for an application to human action recognition we demonstrate how velocity-adapted features enable recognition of human actions in situations with unknown camera motion and complex, non-stationary backgrounds.
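The core idea of velocity adaptation can be illustrated with a simple Galilean warp: if the camera moves with a constant image velocity, compensating that velocity amounts to shifting each frame back along the motion direction, after which a moving pattern appears stationary in the video volume. The sketch below is a minimal, hypothetical illustration of this principle using integer pixel shifts; the paper's actual method instead adapts the spatio-temporal filter kernels to the locally estimated velocity, which this toy example does not implement.

```python
import numpy as np

def galilean_warp(video, vx, vy):
    """Compensate a constant image velocity (vx, vy), in pixels per frame,
    by shifting each frame back along the motion direction.
    Integer shifts via np.roll -- a toy stand-in for the velocity-adapted
    filtering described in the paper."""
    warped = np.empty_like(video)
    for t in range(video.shape[0]):
        dx, dy = int(round(vx * t)), int(round(vy * t))
        # axis 0 = rows (y), axis 1 = columns (x)
        warped[t] = np.roll(video[t], shift=(-dy, -dx), axis=(0, 1))
    return warped

# A bright dot moving 2 px/frame in x: without compensation its position
# drifts over time; after warping with the true velocity it stays fixed.
T, H, W = 5, 16, 16
video = np.zeros((T, H, W))
for t in range(T):
    video[t, 8, 2 + 2 * t] = 1.0

stabilized = galilean_warp(video, vx=2.0, vy=0.0)
cols = [int(np.argmax(stabilized[t].sum(axis=0))) for t in range(T)]
# cols is now constant across frames: the dot sits in the same column
```

A space-time interest-point detector run on the stabilized volume would then respond to genuine local events rather than to the apparent motion induced by the camera, which is the stability property the abstract refers to.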