State-of-the-art video resizing methods usually introduce perceptible visual discontinuities, so preserving visual continuity across frames is one of the most critical issues. In this paper, we propose a novel approach for modeling dynamic visual attention based on spatiotemporal analysis, which detects the focus of interest automatically. Continuously varying co-sited blocks in a video cube are first detected, and their variations are characterized as visual cubes, which are then employed to determine the proper extent of the salient regions in the video frames. Once this extent through the video cubes is determined, the resizing process can be conducted to find the global optimum. Our experiments show that the proposed content-aware video resizing based on spatiotemporal visual cubes effectively generates resized videos while preserving both isotropic manipulation and the continuous dynamics of visual perception.
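The abstract does not give the exact formulation of the visual cubes, but the core idea of measuring temporal variation over co-sited blocks can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a grayscale video as a `(T, H, W)` array, uses per-block mean intensity as the block descriptor, temporal standard deviation as the variation measure, and a hypothetical mean-plus-std threshold to mark salient blocks.

```python
import numpy as np

def block_variation_saliency(video, block=8, thresh=None):
    """Per-block temporal-variation map for a grayscale video cube.

    video : (T, H, W) float array (the "video cube").
    block : side length of the co-sited blocks.
    Returns a (H//block, W//block) variation map and a boolean mask of
    blocks whose variation exceeds `thresh` -- a hypothetical proxy for
    the paper's salient "visual cubes".
    """
    t, h, w = video.shape
    bh, bw = h // block, w // block
    # Crop to a whole number of blocks and split frames into co-sited blocks:
    cubes = video[:, :bh * block, :bw * block].reshape(t, bh, block, bw, block)
    # Mean intensity of each co-sited block in each frame:
    means = cubes.mean(axis=(2, 4))      # shape (T, bh, bw)
    # Temporal variation of each block stack across the whole cube:
    variation = means.std(axis=0)        # shape (bh, bw)
    if thresh is None:
        # Assumed heuristic cutoff; the paper's criterion is not specified.
        thresh = variation.mean() + variation.std()
    return variation, variation > thresh
```

Blocks flagged by the mask would then delimit the salient extent that the resizing step must keep undistorted, while the remaining (low-variation) blocks are candidates for stronger resizing.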