Among the various types of semantic concepts modeled, events pose the greatest challenge, both in the computational power needed to represent them and in the accuracy that can be achieved in modeling them. We introduce a novel low-level visual feature that summarizes motion in a shot. This feature leverages motion vectors from MPEG-encoded video and aggregates local motion vectors over time into a matrix, which we refer to as a motion image. The resulting motion image is representative of the overall motion in a video shot, compressing the temporal dimension while preserving spatial ordering. Building motion models using this feature lets us combine the power of discriminant modeling with the dynamics of motion in video shots, which cannot be accomplished by building generative models over a time series of motion features drawn from multiple frames in the shot. Evaluation of models built using several motion image features on the TRECVID 2005 dataset shows that this novel motion feature yields an average improvement in concept detection performance of 140% over existing motion features. Furthermore, experiments also reveal that when this motion feature is combined with static feature representations of a single keyframe from the shot, such as color and texture features, the fused detection improves by between 4% and 12% over fusion across the static features alone.
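The core of the feature described above is the aggregation of per-frame motion-vector fields into a single spatial matrix. The sketch below, a minimal illustration and not the authors' implementation, assumes the macroblock motion vectors have already been decoded from the MPEG stream into one `(H, W, 2)` array per frame; it collapses the temporal dimension by averaging per-block motion magnitudes while keeping the spatial block layout intact.

```python
import numpy as np

def motion_image(frame_motion_vectors):
    """Aggregate per-frame motion-vector fields into a motion image.

    `frame_motion_vectors`: sequence of (H, W, 2) arrays, one per frame,
    holding (dx, dy) macroblock motion vectors (extraction from the MPEG
    stream is outside this sketch). Returns an (H, W) matrix: the temporal
    dimension is compressed, the spatial block ordering is preserved.
    """
    acc = None
    for mv in frame_motion_vectors:
        # Per-block motion magnitude for this frame.
        mag = np.hypot(mv[..., 0], mv[..., 1])
        acc = mag if acc is None else acc + mag
    # Average over the shot's frames to summarize overall motion.
    return acc / len(frame_motion_vectors)
```

The resulting fixed-size matrix can then be fed to a discriminative classifier (e.g., an SVM) alongside static keyframe features such as color and texture, in the spirit of the fusion experiments reported above.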