This paper presents a method for unsupervised learning and recognition of human actions in video. Lacking any supervision, there is nothing except the inherent biases of a given representation to guide the grouping of video clips into semantically meaningful partitions. Thus, in the first part of this paper, we compare two contemporary methods, Bag of Features (BOF) and Product Manifolds (PM), for clustering video clips of human facial expressions, hand gestures, and full-body actions, with the goal of better understanding how well these very different approaches to behavior recognition produce semantically relevant clusterings of the data. We show that PM yields superior results when measuring the alignment between the generated clusters and the nominal class labeling of the data set. We found that while gross motions were easily clustered by both methods, the BOF representation's failure to preserve structural information led to limitations that are not easily overcome without supervised training, as evidenced by BOF's poor separation of shape labels in the hand-gesture data and its overall poor performance on full-body actions.

In the second part of this paper, we present an unsupervised mechanism for learning micro-actions in continuous video streams using the PM representation. Unlike other works, our method requires no prior knowledge of the expected number of labels/classes, requires no silhouette extraction, is tolerant of minor tracking errors and jitter, and can operate at near real-time speed. We show how to construct a set of training "tracklets," how to cluster them using the Product Manifold distance measure, and how to perform detection using exemplars learned from the clusters. Further, we show that the system is amenable to incremental learning as anomalous activities are detected in the video stream. We demonstrate performance on the publicly available ETHZ Livingroom data set.
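To make the pipeline concrete, the Product Manifold distance between two tracklets can be sketched as follows. This is a hedged illustration, not the paper's implementation: it assumes each tracklet is a fixed-size third-order pixel tensor (height × width × frames), represents each mode unfolding by its leading `k` left singular vectors (a point on a Grassmann manifold per mode), and combines the three factor distances by summing squared sines of the principal angles. The subspace dimension `k` and the exact combination rule are assumptions for this sketch.

```python
import numpy as np

def mode_unfold(tensor, mode):
    # Flatten the 3rd-order tensor into a matrix whose rows index `mode`.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def leading_subspace(matrix, k):
    # Orthonormal basis for the k-dimensional dominant left subspace.
    U, _, _ = np.linalg.svd(matrix, full_matrices=False)
    return U[:, :k]

def principal_angles(U1, U2):
    # Cosines of the principal angles are the singular values of U1^T U2.
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def pm_distance(clip_a, clip_b, k=3):
    """Product-manifold-style distance between two same-sized video tensors.

    Sums a chordal (sin^2 of principal angles) distance over the three
    mode-subspaces; the combination rule is an assumption of this sketch.
    """
    d = 0.0
    for mode in range(3):
        Ua = leading_subspace(mode_unfold(clip_a, mode), k)
        Ub = leading_subspace(mode_unfold(clip_b, mode), k)
        d += np.sum(np.sin(principal_angles(Ua, Ub)) ** 2)
    return np.sqrt(d)
```

Given such a pairwise distance, tracklets can be grouped with any clustering method that accepts a precomputed distance matrix, and each resulting cluster can be summarized by an exemplar (e.g., its medoid) for later detection.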