Graphical models are often used to represent and recognize activities. Purely unsupervised methods (such as HMMs) can be trained automatically, but they yield models whose internal structure (the nodes) is difficult to interpret semantically. Manually constructed networks typically have nodes that correspond to sub-events, but programming and training such networks is tedious and requires extensive domain expertise. In this paper, we propose a semi-supervised approach in which a manually structured Propagation Network (a form of dynamic Bayesian network) is initialized from a small amount of fully annotated data and then refined in an unsupervised fashion by an EM-based learning method. During node refinement (the M step), a boosting-based algorithm is employed to train the evidence detectors of the individual nodes. Experiments on a variety of data types, both vision and inertial measurements, across several tasks demonstrate the ability to learn from as little as one fully annotated example accompanied by a small number of positive but unannotated training examples. The system is applied to both recognition and anomaly detection tasks.
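The training scheme described above can be illustrated with a minimal sketch. This is not the paper's implementation: the per-node Gaussian threshold detectors below stand in for the boosted evidence detectors, and the function names (`semi_supervised_em`, `e_step`, `fit_detector`) are hypothetical. It shows only the overall loop: seed detectors from one annotated sequence, pseudo-label the unannotated sequences (E step), then refit each node's detector (M step).

```python
# Illustrative sketch of the semi-supervised EM-style loop (assumption:
# simplified detectors, not the paper's boosting-based ones).
import statistics

def fit_detector(values):
    """M step for one node: fit mean/stdev of its scalar evidence feature."""
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values) or 1e-6  # guard against zero spread
    return (mu, sigma)

def e_step(sequence, detectors):
    """Assign each frame to the node whose detector scores it highest."""
    labels = []
    for x in sequence:
        scores = {node: -abs((x - mu) / sigma)  # Gaussian log-lik., up to const.
                  for node, (mu, sigma) in detectors.items()}
        labels.append(max(scores, key=scores.get))
    return labels

def semi_supervised_em(annotated, unannotated, n_iter=5):
    # Initialize detectors from the single fully annotated example.
    frames, labels = annotated
    detectors = {node: fit_detector([x for x, y in zip(frames, labels) if y == node])
                 for node in set(labels)}
    # Refine on positive but unannotated sequences.
    for _ in range(n_iter):
        pseudo = [(seq, e_step(seq, detectors)) for seq in unannotated]
        all_frames = frames + [x for seq, _ in pseudo for x in seq]
        all_labels = labels + [y for _, lab in pseudo for y in lab]
        for node in detectors:
            vals = [x for x, y in zip(all_frames, all_labels) if y == node]
            if vals:
                detectors[node] = fit_detector(vals)
    return detectors
```

For example, seeding from one sequence labeled with nodes A and B, `semi_supervised_em(([0.1, 0.2, 1.9, 2.1], ['A', 'A', 'B', 'B']), [[0.0, 2.0, 2.2]])` pulls the unannotated frames into the appropriate nodes and refines both detectors accordingly.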