We address the task of inferring the future actions of people from noisy visual input, a task we call activity forecasting. To forecast activities accurately, our approach models the effect of the physical environment on the choice of human actions, combining state-of-the-art semantic scene understanding with ideas from optimal control theory. Our unified model also integrates several other key elements of activity analysis: destination forecasting, sequence smoothing, and transfer learning. As a proof of concept, we focus on the domain of trajectory-based activity analysis from visual input. Experimental results demonstrate that our model accurately predicts distributions over the future actions of individuals. We also show how the same techniques can improve the results of tracking algorithms by leveraging information about likely goals and trajectories.
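The coupling of optimal control with distribution-level prediction described above is commonly realized with maximum-entropy inverse reinforcement learning. The sketch below is an illustration of that idea, not the authors' implementation: it runs soft value iteration on a 2D grid whose per-cell reward would, in practice, be learned from semantic scene features, and derives the stochastic policy that induces a distribution over future trajectories toward a forecasted destination. The 4-connected dynamics, function names, and constant reward are assumptions made for the example.

```python
import numpy as np

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-connected grid dynamics (assumed)

def soft_value_iteration(reward, goal, n_iters=200):
    """MaxEnt-style soft (log-sum-exp) value iteration on a grid.

    reward: (H, W) per-cell reward; in a full system these values would be
            learned from semantic scene features via IRL.
    goal:   (row, col) forecasted destination, treated as absorbing.
    Returns V, where V[s] = log of the total exponentiated return over all
    paths from s to the goal.
    """
    H, W = reward.shape
    V = np.full((H, W), -1e9)
    V[goal] = 0.0
    for _ in range(n_iters):
        V_new = np.full((H, W), -1e9)
        for r in range(H):
            for c in range(W):
                # Q-value of each feasible move: reward here + value of successor
                qs = [reward[r, c] + V[r + dr, c + dc]
                      for dr, dc in MOVES
                      if 0 <= r + dr < H and 0 <= c + dc < W]
                V_new[r, c] = np.logaddexp.reduce(qs)  # soft max over actions
        V_new[goal] = 0.0  # destination stays absorbing
        V = V_new
    return V

def policy(V, reward, state):
    """Stochastic policy P(move | state) ∝ exp(Q): a distribution over
    successor cells, from which trajectory distributions can be sampled."""
    H, W = V.shape
    r, c = state
    nexts, weights = [], []
    for dr, dc in MOVES:
        nr, nc = r + dr, c + dc
        if 0 <= nr < H and 0 <= nc < W:
            nexts.append((nr, nc))
            weights.append(np.exp(reward[r, c] + V[nr, nc] - V[r, c]))
    probs = np.array(weights) / np.sum(weights)
    return nexts, probs
```

Sampling repeatedly from `policy` starting at an observed position yields the kind of distribution over future trajectories that the abstract describes; in the full framework the reward parameters are fit so that this distribution matches demonstrated behavior.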