In this paper, we present a probabilistic multi-task learning approach for visual saliency estimation in video. Our approach models visual saliency estimation by simultaneously considering stimulus-driven and task-related factors within a probabilistic framework. In this framework, a stimulus-driven component simulates the low-level processes of the human visual system using multi-scale wavelet decomposition and unbiased feature competition, while a task-related component simulates the high-level processes that bias the competition among input features. Unlike existing approaches, we propose a multi-task learning algorithm that learns a task-related "stimulus-saliency" mapping function for each scene. The algorithm also learns various fusion strategies, which are used to integrate the stimulus-driven and task-related components into the final visual saliency. Extensive experiments were carried out on two public eye-fixation datasets and one regional saliency dataset. Experimental results show that our approach remarkably outperforms eight state-of-the-art approaches.
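To make the pipeline described above concrete, the following is a minimal sketch, not the authors' implementation. It assumes PyWavelets (`pywt`) for the multi-scale wavelet decomposition, a given per-scene linear weight vector `w` standing in for the "stimulus-saliency" mapping functions the paper learns via multi-task learning, and a single convex weight `alpha` standing in for the learned fusion strategies; all function names here are illustrative.

```python
# Minimal sketch of the saliency pipeline described in the abstract.
# Assumptions (not from the paper): PyWavelets for the wavelet decomposition,
# a given linear mapping w for the task-related component, and a single
# convex weight alpha in place of the learned fusion strategies.
import numpy as np
import pywt


def upsample(band, shape):
    """Nearest-neighbour upsampling of a coefficient band to the frame size."""
    rows = np.arange(shape[0]) * band.shape[0] // shape[0]
    cols = np.arange(shape[1]) * band.shape[1] // shape[1]
    return band[np.ix_(rows, cols)]


def stimulus_driven_saliency(frame, levels=3):
    """Bottom-up component: pooled energy of multi-scale wavelet detail bands."""
    coeffs = pywt.wavedec2(frame, "haar", level=levels)
    saliency = np.zeros(frame.shape)
    for detail_bands in coeffs[1:]:      # skip the coarse approximation band
        for band in detail_bands:        # horizontal/vertical/diagonal details
            saliency += upsample(np.abs(band), frame.shape)
    return saliency / (saliency.max() + 1e-8)


def task_related_saliency(features, w):
    """Top-down component: a per-scene linear 'stimulus-saliency' mapping.

    features: (H, W, D) per-pixel feature maps; w: (D,) weights that the paper
    would learn with its multi-task learning algorithm (given here as input).
    """
    s = features @ w
    return (s - s.min()) / (s.max() - s.min() + 1e-8)


def fuse(bottom_up, top_down, alpha=0.5):
    """Convex combination standing in for the learned fusion strategy."""
    return alpha * bottom_up + (1.0 - alpha) * top_down


# Toy usage on a random grayscale "frame" with two hypothetical feature maps.
frame = np.random.rand(64, 64)
features = np.stack([frame, frame ** 2], axis=-1)
saliency_map = fuse(stimulus_driven_saliency(frame),
                    task_related_saliency(features, np.array([0.7, 0.3])))
```

In the paper itself, the mapping weights are learned jointly across scenes by the multi-task learning algorithm and the fusion strategy is likewise learned rather than fixed; this sketch only shows where those learned pieces plug into the stimulus-driven/task-related decomposition.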