Computer vision applications often need to process only a representative part of the visual input rather than the whole image or sequence. Considerable research has been carried out into salient region detection methods, based either on models emulating human visual attention (VA) mechanisms or on computational approximations. Most of the proposed methods are bottom-up, and their major goal is to filter out redundant visual information. In this paper, we propose and elaborate on a saliency detection model that treats a video sequence as a spatiotemporal volume and generates a local saliency measure for each visual unit (voxel). This computation involves an optimization process incorporating inter- and intra-feature competition at the voxel level. Perceptual decomposition of the input, spatiotemporal center-surround interactions, and the integration of heterogeneous feature conspicuity values are described, and an experimental framework for video classification is set up. This framework consists of a series of experiments that show the effect of saliency on classification performance and allow us to draw conclusions on how well the detected salient regions represent the visual input. A comparison with related approaches is also carried out, demonstrating the potential of the proposed method.