We present a method for motion-based video segmentation and segment classification as a step towards video summarization. The video is segmented sequentially by detecting changes in the dominant image motion, which is assumed to arise from camera motion and is represented by a 2D affine model. Detection is achieved by analysing the temporal variations of selected coefficients of the robustly estimated 2D affine model. The resulting segments provide sound temporal units for subsequent classification. For this second stage, we adopt a statistical representation of the residual motion content of the video scene, based on the distribution of temporal co-occurrences of local motion-related measurements. Pre-identified classes of dynamic events are learned off-line from a training set of video samples of the genre of interest, and each segment is then classified according to a Maximum Likelihood criterion. Finally, excerpts of the relevant classes can be selected to build the summary. Experiments on both steps of the method, carried out on different video genres, yield very encouraging results even though only low-level motion information is exploited.
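The two ingredients of the method can be illustrated in code: fitting a 2D affine model (dx, dy) = (a1 + a2·x + a3·y, a4 + a5·x + a6·y) to the dominant image motion, and labelling a segment by a Maximum Likelihood criterion over its co-occurrence statistics. The sketch below is illustrative only: the function names and data layout are assumptions, plain least squares stands in for the paper's robust estimator, and the class models are treated as simple multinomial distributions learned off-line.

```python
import numpy as np

def fit_affine_motion(pts, flow):
    """Least-squares fit of a 2D affine motion model
    (dx, dy) = (a1 + a2*x + a3*y, a4 + a5*x + a6*y)
    to sparse point motions.
    pts: (N, 2) pixel positions; flow: (N, 2) displacements.
    Returns the six coefficients (a1, ..., a6)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    # Solve the horizontal and vertical components independently.
    # (The paper uses a robust estimator; plain LS is used here for brevity.)
    ax, *_ = np.linalg.lstsq(A, flow[:, 0], rcond=None)
    ay, *_ = np.linalg.lstsq(A, flow[:, 1], rcond=None)
    return np.concatenate([ax, ay])

def ml_classify(cooc_counts, class_models):
    """Maximum Likelihood labelling of one video segment.
    cooc_counts: histogram of temporal co-occurrences of local
    motion-related measurements within the segment.
    class_models: dict label -> multinomial probabilities per bin,
    learned off-line from training samples of the genre of interest."""
    eps = 1e-12  # guard against log(0) for empty bins
    scores = {label: float(np.sum(cooc_counts * np.log(p + eps)))
              for label, p in class_models.items()}
    return max(scores, key=scores.get)
```

Segmentation would then amount to tracking, e.g., the translation coefficients a1 and a4 over time and declaring a segment boundary where their temporal variation exhibits an abrupt change.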