Fast Action Detection via Discriminative Random Forest Voting and Top-K Subvolume Search
IEEE Transactions on Multimedia
Many existing techniques in content-based video retrieval treat a video sequence as a whole when matching it against a query video or assigning a text label. Such an approach has serious limitations for human action retrieval because an action may occupy only a sub-region of the frame and last for only a small portion of the video. In such situations we essentially need to match subvolumes of the video sequences against the query video, and a naive exhaustive search is impractical due to the large number of possible subvolumes in each sequence. In this paper, we propose a novel action retrieval framework that performs pattern matching at the subvolume level and handles a large corpus of videos efficiently. We construct an unsupervised random forest to index the video database, generate a score volume with Hough voting, and then employ a max sub-path strategy to quickly locate the temporal and spatial position of the action in every video sequence in the database. We present action search experiments on challenging datasets to validate the efficiency and effectiveness of our system.
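The temporal side of the max sub-path step can be sketched as a maximum-sum subarray search over per-frame Hough voting scores. This is a simplified 1D illustration, not the paper's full spatio-temporal subvolume search; the function name and the assumption that scores are mean-shifted (so irrelevant frames contribute negative values) are ours:

```python
# Sketch (not the authors' implementation): given per-frame voting scores
# for one video, find the temporal segment with maximal total score using
# Kadane's maximum-sum-subarray algorithm. Scores are assumed mean-shifted
# so that frames unrelated to the query contribute negative values.

def max_subpath(scores):
    """Return (best_sum, start, end) of the max-sum contiguous segment."""
    best_sum, best_start, best_end = float("-inf"), 0, 0
    cur_sum, cur_start = 0.0, 0
    for i, s in enumerate(scores):
        if cur_sum <= 0:
            # Restart the candidate segment at frame i.
            cur_sum, cur_start = s, i
        else:
            cur_sum += s
        if cur_sum > best_sum:
            best_sum, best_start, best_end = cur_sum, cur_start, i
    return best_sum, best_start, best_end

# Example: frames 2..4 carry positive evidence for the query action.
print(max_subpath([-1.0, -0.5, 2.0, 1.5, 0.5, -2.0, -0.5]))  # → (4.0, 2, 4)
```

Because the scan is linear in the number of frames, ranking every video in the database by its best sub-path score stays cheap even for a large corpus.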