This paper proposes a novel framework for cross-domain traffic scene understanding. Existing learning-based models for outdoor wide-area scene interpretation require long-term data collection to acquire statistically sufficient training samples for every new scene. This makes installation costly, prevents models from being easily relocated, and rules out deployment on UAVs, whose scenes change continuously. In contrast, our method adopts a geometrical matching approach that relates motion models learned from a database of source scenes (source domains) to a handful of sparsely observed data in a new target scene (target domain). This framework is capable of online "sparse-shot" anomaly detection and motion event classification in the unseen target domain, without extensive data collection, labelling, or offline model training for each new target domain. That is, models trained in different source domains can be deployed to a new target domain given only a few unlabelled observations, without any training in the new target domain. Crucially, to enable cross-domain interpretation without risk of dramatic negative transfer, we introduce and formulate a scene association criterion that quantifies the transferability of motion models from one scene to another. Extensive experiments demonstrate the effectiveness of the proposed framework for cross-domain motion event classification, anomaly detection and scene association.
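The abstract does not specify how the scene association criterion is computed. As a purely hypothetical illustration of the kind of transferability score it describes, one could compare motion statistics of two scenes with a bounded divergence measure; the sketch below uses the Jensen-Shannon divergence between toy motion-direction histograms (the function name, bin counts, and example data are all illustrative assumptions, not the paper's method):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    Symmetric, bounded by ln(2); smaller values indicate more
    similar motion statistics (hypothetical association score).
    """
    p = np.asarray(p, dtype=float) + eps  # eps avoids log(0)
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()       # normalise to probabilities
    m = 0.5 * (p + q)                     # mixture distribution
    kl = lambda a, b: np.sum(a * np.log(a / b))  # KL divergence
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy motion-direction histograms (8 angular bins) for three scenes
source = [30, 5, 2, 1, 1, 2, 5, 30]   # mostly horizontal traffic flow
target = [28, 6, 3, 1, 2, 2, 6, 25]   # similar road layout
other  = [1, 2, 30, 30, 2, 1, 1, 1]   # very different flow pattern

# A lower divergence would suggest the source scene's motion models
# are more likely to transfer to that target.
print(js_divergence(source, target) < js_divergence(source, other))  # True
```

Under such a score, source scenes below some divergence threshold would be deemed transferable, and associations above it rejected to avoid negative transfer.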