This paper deals with the selection of relevant motion within a scene. The proposed method extracts 3D motion features and quantifies their rarity to compute bottom-up saliency maps. We show that using 3D motion features, namely motion direction and velocity, achieves much better results than the same algorithm using only 2D information. This is especially true in close scenes with small groups of people or moving objects viewed frontally. The proposed algorithm uses motion features, but it can easily be generalized to other dynamic or static features. It is implemented on Max/MSP/Jitter, a platform for real-time signal analysis. Social signal processing, video games, gesture processing and, more generally, higher-level scene understanding can benefit from this method.
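The core idea of rarity-based bottom-up saliency can be sketched as follows: per-pixel motion features (direction and velocity) are quantized into a joint histogram, and each pixel's saliency is the self-information of its bin, so rare motion stands out. This is a minimal illustrative sketch, not the authors' implementation; the binning scheme and the `rarity_saliency` function are assumptions for illustration.

```python
import numpy as np

def rarity_saliency(direction, velocity, n_bins=16):
    """Bottom-up saliency by rarity quantification of motion features.

    Illustrative sketch (not the paper's exact algorithm): each pixel's
    (direction, velocity) pair is binned into a joint histogram, and its
    saliency is the self-information -log(p) of its bin.
    """
    # Quantize direction [0, 2*pi) and normalized velocity into n_bins each.
    d_bin = np.clip((direction / (2 * np.pi) * n_bins).astype(int), 0, n_bins - 1)
    v_bin = np.clip((velocity / (velocity.max() + 1e-9) * n_bins).astype(int),
                    0, n_bins - 1)
    joint = d_bin * n_bins + v_bin

    # Empirical probability of each joint motion bin over the frame.
    counts = np.bincount(joint.ravel(), minlength=n_bins * n_bins)
    p = counts / counts.sum()

    # Rare motion -> low p -> high self-information -> high saliency.
    saliency = -np.log(p[joint] + 1e-12)
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / (rng + 1e-12)
```

For example, in a frame where most pixels drift slowly rightward while a small patch moves quickly upward, the patch's motion bin is rare and receives the highest saliency. Extending the features from 2D image motion to 3D motion (as proposed) only changes the feature vector being binned, not the rarity mechanism.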