We propose a novel algorithm that extracts time series from video to characterize the type of motion the video contains. Our method describes the motion in a video as a collection of spatiotemporal gradients, each modeling high variation, in both space and time, relative to its spatiotemporal neighborhood. Rather than coarsely sampling the motion by taking one event per frame, we obtain a continuous function by considering all events that fall within a short sliding window whose length equals the temporal variance. The result is a composite time series that represents the motion in the video independent of rotation and scale. As an empirical demonstration of the method's viability, we cluster the human motions contained in 114 videos into hand-based and foot-based motions with precisions of 86.0% and 75.9%, respectively.
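The pipeline sketched in the abstract — detect spatiotemporal events where the gradient magnitude is high, then aggregate the events over a short sliding window into a continuous time series — can be illustrated roughly as follows. This is a minimal sketch, not the paper's implementation: it assumes the video is a grayscale NumPy array of shape `(T, H, W)`, the helper names `spatiotemporal_events` and `motion_time_series` are hypothetical, and a fixed gradient threshold stands in for the paper's variance-based event model.

```python
import numpy as np

def spatiotemporal_events(video, thresh):
    """Return (t, y, x) locations where the spatiotemporal gradient is large.

    A simple stand-in for the paper's event detector: compute the gradient
    along the temporal and both spatial axes and threshold its magnitude.
    """
    gt, gy, gx = np.gradient(video.astype(float))
    mag = np.sqrt(gt ** 2 + gy ** 2 + gx ** 2)
    return np.argwhere(mag > thresh)  # each row is (t, y, x)

def motion_time_series(events, n_frames, window):
    """Aggregate events over a sliding temporal window of length `window`.

    Counting all events inside the window (rather than one event per frame)
    yields the smoother, quasi-continuous series described in the abstract.
    """
    counts = np.bincount(events[:, 0], minlength=n_frames)
    kernel = np.ones(window)
    return np.convolve(counts, kernel, mode="same")

# Toy example: a bright 4x4 block drifting one pixel per frame.
T, H, W = 30, 16, 16
video = np.zeros((T, H, W))
for t in range(T):
    video[t, 4:8, t % 10 : t % 10 + 4] = 1.0

events = spatiotemporal_events(video, thresh=0.5)
series = motion_time_series(events, T, window=5)
```

In the full method the window length would be tied to the estimated temporal variance of the gradients rather than fixed, and the resulting series would then feed a clustering step over the video collection.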