A survey of hybrid MC/DPCM/DCT video coding distortions. Signal Processing, special issue on image and video quality metrics.
Modeling attention to salient proto-objects. Neural Networks, 2006 special issue.
Content-based attention ranking using visual and contextual attention model for baseball videos. IEEE Transactions on Multimedia, special issue on integration of context and content.
Visual sensitivity guided bit allocation for video coding. IEEE Transactions on Multimedia.
Spatiotemporal visual considerations for video coding. IEEE Transactions on Multimedia.
Fast and robust generation of feature maps for region-based visual attention. IEEE Transactions on Image Processing.
Video adaptation for small display based on content recomposition. IEEE Transactions on Circuits and Systems for Video Technology.
To extract attention regions from distorted videos, a distortion-weighted spatiotemporal visual attention model is proposed. Visual attention regions are first acquired in a bottom-up manner from spatial and temporal saliency maps. Meanwhile, a blocking-artifact saliency map is detected from intensity-gradient features. An attention selection step then identifies, in a top-down manner, the visual attention region with the relatively most severe blocking artifacts as the Focus of Attention (FOA). Experimental results show that, compared with Walther's and You's models, the proposed model not only accurately analyzes spatiotemporal saliency based on intensity, texture, and motion features, but can also estimate the blocking-artifact distortion.
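The pipeline described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: it assumes 8x8 coding blocks and models blocking artifacts as large intensity gradients along block boundaries, then selects as FOA the candidate attention region with the strongest artifact response. The function names (`blocking_artifact_map`, `select_foa`) and the region representation are invented for this sketch.

```python
import numpy as np

def blocking_artifact_map(frame, block=8):
    """Estimate a blocking-artifact saliency map from intensity gradients.

    Sketch of the idea: blocking artifacts appear as abnormally large
    horizontal/vertical intensity jumps across the 8x8 coding-block
    boundaries, so we accumulate boundary-crossing gradients only.
    """
    frame = frame.astype(np.float64)
    gx = np.abs(np.diff(frame, axis=1))  # gradients between columns, shape (H, W-1)
    gy = np.abs(np.diff(frame, axis=0))  # gradients between rows, shape (H-1, W)

    sal = np.zeros_like(frame)
    # Gradients across vertical block boundaries (columns 7|8, 15|16, ...).
    for c in range(block - 1, frame.shape[1] - 1, block):
        sal[:, c] += gx[:, c]
    # Gradients across horizontal block boundaries (rows 7|8, 15|16, ...).
    for r in range(block - 1, frame.shape[0] - 1, block):
        sal[r, :] += gy[r, :]
    return sal

def select_foa(regions, artifact_map):
    """Top-down attention selection: among the bottom-up candidate regions
    (r0, r1, c0, c1), return the index of the one with the strongest mean
    blocking-artifact response -- the Focus of Attention (FOA)."""
    scores = [artifact_map[r0:r1, c0:c1].mean()
              for (r0, r1, c0, c1) in regions]
    return int(np.argmax(scores))
```

As a quick check, a frame containing one bright 8x8 block produces boundary gradients only around that block, so a candidate region covering it wins the FOA selection over a flat region.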