We propose a set of kinematic features derived from the optical flow for human action recognition in videos. The set of kinematic features includes divergence, vorticity, the symmetric and antisymmetric flow fields, the second and third principal invariants of the flow gradient and rate-of-strain tensors, and the third principal invariant of the rate-of-rotation tensor. Each kinematic feature, when computed from the optical flow of a sequence of images, gives rise to a spatiotemporal pattern. It is then assumed that the representative dynamics of the optical flow are captured by these spatiotemporal patterns in the form of dominant kinematic trends, or kinematic modes. These kinematic modes are computed by performing Principal Component Analysis (PCA) on the spatiotemporal volumes of the kinematic features. For classification, we propose the use of multiple instance learning (MIL), in which each action video is represented by a bag of kinematic modes. Each video is then embedded into a kinematic-mode-based feature space, and the coordinates of the video in that space are used for classification with the nearest neighbor algorithm. Qualitative and quantitative results are reported on benchmark data sets.
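As a rough illustration of the first two stages of this pipeline, the sketch below computes two of the kinematic features named above (divergence and vorticity, plus the second principal invariant of the flow gradient tensor) from a precomputed optical flow field, and then extracts dominant kinematic modes from a spatiotemporal feature volume via PCA. It is a minimal sketch, not the authors' implementation: the function names, the use of central finite differences for the flow derivatives, and the SVD-based PCA are our assumptions.

```python
import numpy as np

def kinematic_features(u, v):
    """Per-frame kinematic features from an optical flow field.

    u, v : 2-D arrays with the horizontal and vertical flow components.
    Returns divergence, vorticity, and the second principal invariant of
    the flow gradient tensor (a subset of the features in the abstract).
    """
    # Spatial derivatives of the flow via central finite differences;
    # np.gradient returns derivatives along rows (y) then columns (x).
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)

    divergence = du_dx + dv_dy   # local expansion/contraction of the flow
    vorticity = dv_dx - du_dy    # local rotation of the flow

    # Second principal invariant of the 2-D gradient tensor
    # [[du_dx, du_dy], [dv_dx, dv_dy]], i.e. its determinant.
    q_invariant = du_dx * dv_dy - du_dy * dv_dx

    return divergence, vorticity, q_invariant

def kinematic_modes(feature_volume, n_modes=10):
    """Dominant kinematic modes of a spatiotemporal feature volume.

    feature_volume : array of shape (T, H, W), one feature map per frame.
    Each frame is flattened to a vector, the data are centered, and PCA is
    performed over time; the leading principal directions, reshaped back
    to (H, W), serve as the kinematic modes.
    """
    T, H, W = feature_volume.shape
    X = feature_volume.reshape(T, H * W)
    X = X - X.mean(axis=0)  # center before PCA
    # PCA via SVD: rows of Vt are principal directions in pixel space.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_modes].reshape(n_modes, H, W)
```

Under this reading, one such set of modes per kinematic feature forms the "bag" for a video in the MIL stage, and classification proceeds by embedding each bag into a mode-based feature space and applying nearest neighbor matching, as described in the abstract.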