In video, global features are often used for reasons of computational efficiency, where each global feature captures information from a single video frame. But frames in a video change over time, so an important question is: how can we meaningfully aggregate frame-based features so as to preserve the variation over time? In this paper we propose to use the Fisher kernel to capture variation over time in video. Although this approach discards temporal order, it captures both subtle variations in time, such as those caused by a moving bicycle, and drastic variations in time, such as shot changes in a documentary. Our work should not be confused with a bag-of-local-visual-features approach, which captures the visual variation of local features in both time and space indiscriminately. Instead, each of our features measures a complete frame, so we capture variation in time only. We show that our framework is highly general, reporting improvements using frame-based visual features, body-part features, and audio features on three diverse datasets: we obtain state-of-the-art results on the UCF50 human action dataset and improve the state of the art on the MediaEval 2012 video-genre benchmark and the ADL daily activity recognition dataset.
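To make the encoding step concrete, the sketch below shows how per-frame features can be aggregated into a single video-level descriptor with a Fisher vector computed against a diagonal-covariance Gaussian mixture, following the improved Fisher kernel of Perronnin et al. (ECCV 2010). This is a minimal illustration under stated assumptions, not the paper's actual pipeline: the function name, mixture size, feature dimensionality, and random stand-in data are all hypothetical.

```python
# Minimal sketch: Fisher-vector encoding of per-frame features.
# Assumes frame descriptors are already extracted; random data below
# is a stand-in for real per-frame features.
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(frames, gmm):
    """Encode a (T, D) array of per-frame features as a Fisher vector
    w.r.t. the means and variances of a diagonal-covariance GMM."""
    T, D = frames.shape
    q = gmm.predict_proba(frames)        # (T, K) soft assignments
    w = gmm.weights_                     # (K,)  mixture weights
    mu = gmm.means_                      # (K, D)
    sigma = np.sqrt(gmm.covariances_)    # (K, D) per-dimension std devs

    # Normalized gradients w.r.t. means and standard deviations,
    # averaged over frames: temporal order is discarded, as in the paper.
    diff = (frames[:, None, :] - mu[None, :, :]) / sigma[None, :, :]  # (T, K, D)
    g_mu = (q[:, :, None] * diff).sum(axis=0) / (T * np.sqrt(w)[:, None])
    g_sigma = (q[:, :, None] * (diff ** 2 - 1)).sum(axis=0) / (T * np.sqrt(2 * w)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_sigma.ravel()])

    # Power and L2 normalization, as in the improved Fisher kernel.
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)

# Usage: fit the GMM on frame features pooled from training videos,
# then encode each video from its own frames.
rng = np.random.default_rng(0)
train_frames = rng.normal(size=(2000, 64))   # stand-in pooled training frames
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(train_frames)
video = rng.normal(size=(120, 64))           # one video: 120 frames, 64-D features
print(fisher_vector(video, gmm).shape)       # (2 * 8 * 64,) = (1024,)
```

The resulting fixed-length vector can then be fed to any standard classifier (e.g., a linear SVM), which is what makes the encoding agnostic to the choice of frame-based visual, body-part, or audio features.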