From Bottom-Up Visual Attention to Robot Action Learning
DEVLRN '09 Proceedings of the 2009 IEEE 8th International Conference on Development and Learning
This paper presents an architecture that extends bottom-up visual attention to dynamic scene analysis. In dynamic scenes, particularly when learning actions from demonstrations, robots must focus stably on the relevant movement while disregarding surrounding noise, yet remain sensitive to any new relevant movement that might occur in the surroundings. To meet these contradictory requirements of stability and sensitivity, this paper introduces biologically inspired mechanisms for retinal filtering and stochastic attention selection. The former reduces the complexity of peripheral signals by filtering the input image, which enhances bottom-up saliency in the fovea while admitting only prominent signals from the periphery. The latter allows robots to shift attention to a less salient, but still conspicuous, peripheral location that is likely relevant to the demonstrated action. Integrating these mechanisms with the computation of bottom-up saliency enables robots to extract important action sequences from task demonstrations. Experiments with a simulated and a natural scene show that the proposed model outperforms comparative models.
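The two mechanisms can be illustrated with a minimal sketch. This is not the paper's implementation: the eccentricity-dependent Gaussian attenuation standing in for retinal filtering, the softmax sampling standing in for stochastic attention selection, and all function names and parameters (`sigma`, `temperature`) are assumptions made for illustration.

```python
import numpy as np

def retinal_filter(saliency, fovea, sigma=0.35):
    """Crude stand-in for retinal filtering: attenuate each location's
    saliency by a Gaussian of its eccentricity from the fovea, so weak
    peripheral signals are suppressed while foveal saliency is preserved."""
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    fy, fx = fovea
    ecc = np.sqrt(((ys - fy) / h) ** 2 + ((xs - fx) / w) ** 2)
    return saliency * np.exp(-(ecc ** 2) / (2 * sigma ** 2))

def stochastic_attention(saliency, temperature=0.1, rng=None):
    """Sample the next attention target with probability proportional to
    exp(saliency / temperature): usually the current peak (stability), but
    occasionally a weaker, still-salient peripheral peak (sensitivity)."""
    rng = rng or np.random.default_rng(0)
    logits = saliency.ravel() / temperature
    p = np.exp(logits - logits.max())
    p /= p.sum()
    idx = rng.choice(p.size, p=p)
    return np.unravel_index(idx, saliency.shape)

# Toy saliency map: the attended movement at (2, 2) and a weaker
# novel movement appearing in the periphery at (7, 7).
s = np.zeros((10, 10))
s[2, 2], s[7, 7] = 1.0, 0.6
filtered = retinal_filter(s, fovea=(2, 2))
y, x = stochastic_attention(filtered)
```

Lowering `temperature` makes selection nearly deterministic (pure stability); raising it lets suppressed peripheral peaks win more often, trading stability for sensitivity.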