This work proposes a complete framework for human activity discovery, modeling, and recognition from video. The framework takes trajectory information as input and proceeds all the way to video interpretation. It narrows the gap between low-level vision information and semantic interpretation by building an intermediate layer of Primitive Events. The proposed primitive-event representation aims to capture meaningful motions (actions) over the scene, with the advantage of being learned in an unsupervised manner. We propose using Primitive Events as descriptors to automatically discover, model, and recognize activities. Activity discovery is performed using only real tracking data. Semantics are attached to the discovered activities (e.g., "Preparing Meal", "Eating"), and recognition is evaluated on new datasets.
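As a rough illustration of the idea of an unsupervised primitive-event layer between raw trajectories and activity labels, the sketch below clusters per-frame displacement vectors into a small codebook (here with a toy k-means) and represents an activity as the sequence of codebook labels along a trajectory. This is a minimal sketch under assumed simplifications, not the paper's actual method; all function names (`discover_primitive_events`, `activity_signature`) are illustrative.

```python
import math

def discover_primitive_events(displacements, k=2, iters=20):
    """Toy k-means over 2-D displacement vectors (unsupervised).

    `displacements` is a list of (dx, dy) tuples; returns the learned
    cluster centers (the "primitive event" codebook) and per-sample labels.
    """
    centers = list(displacements[:k])  # naive init: first k samples
    labels = [0] * len(displacements)
    for _ in range(iters):
        # Assign each displacement to its nearest center.
        labels = [min(range(k), key=lambda c: math.dist(d, centers[c]))
                  for d in displacements]
        # Recompute each center as the mean of its members.
        for c in range(k):
            members = [d for d, l in zip(displacements, labels) if l == c]
            if members:
                centers[c] = tuple(sum(x) / len(members)
                                   for x in zip(*members))
    return centers, labels

def activity_signature(trajectory, centers):
    """Map a trajectory (list of (x, y) points) to its sequence of
    primitive-event labels, collapsing consecutive repeats."""
    disp = [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])]
    labels = [min(range(len(centers)),
                  key=lambda c: math.dist(d, centers[c])) for d in disp]
    return [l for i, l in enumerate(labels) if i == 0 or l != labels[i - 1]]
```

On a trajectory that first moves rightward and then upward, the two motion regimes end up in different codebook clusters, so the activity signature reduces to two primitive-event labels; discovered clusters like these are what a semantic label (e.g., "Preparing Meal") would later be attached to.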