This work proposes to learn visual encodings of attention patterns that enable sequential attention for object detection in real-world environments. The system embeds a saccadic decision procedure in a cascaded process in which visual evidence is probed at informative image locations. It is based on the extraction of information-theoretic saliency: informative local image descriptors are identified and provide selected foci of interest. The local information, in terms of codebook vector responses, and the geometric information, in the shift of attention, together define the recognition states of a Markov decision process. A Q-learner then searches over candidate actions towards salient locations, developing a strategy of action sequences in state space that maximizes information gain. The method is evaluated on outdoor object recognition and demonstrates efficient performance.
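The core mechanism described above can be illustrated with a minimal tabular Q-learning sketch. Everything here is a toy stand-in, not the authors' implementation: in the paper, a state would combine the codebook vector response of the current local descriptor with the geometry of the attention shift, and the reward would reflect information gain about the object hypothesis. The state and action counts, learning parameters, and the `step` environment below are all illustrative assumptions.

```python
import random

N_STATES = 8    # discretized recognition states (illustrative assumption)
N_ACTIONS = 4   # candidate saccade directions (illustrative assumption)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table mapping (state, action) -> expected discounted return.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def choose_action(state):
    """Epsilon-greedy selection over saccadic actions."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    row = Q[state]
    return row.index(max(row))

def update(state, action, reward, next_state):
    """Standard one-step Q-learning backup."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def step(state, action):
    """Toy environment: reward 1.0 when the saccade lands on the
    'informative' state 0, else 0.0. This stands in for the
    information-gain reward used in the paper."""
    next_state = (state + action + 1) % N_STATES
    reward = 1.0 if next_state == 0 else 0.0
    return next_state, reward

random.seed(0)
state = 1
for _ in range(5000):
    action = choose_action(state)
    next_state, reward = step(state, action)
    update(state, action, reward, next_state)
    state = next_state
```

After training, the greedy policy `argmax_a Q[s][a]` encodes a sequence of attention shifts that steers towards the rewarding (informative) location, mirroring how the learned strategy directs saccades towards salient, discriminative regions.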