Technical Note: Q-Learning. Machine Learning.
C4.5: Programs for Machine Learning.
Function-based generic recognition for multiple object categories. CVGIP: Image Understanding.
Recognition by functional parts. Computer Vision and Image Understanding, special issue on function-based vision.
Interactive recognition and representation of functionality. Computer Vision and Image Understanding, special issue on function-based vision.
Markov Decision Processes: Discrete Stochastic Dynamic Programming.
Modeling Object Recognition as a Markov Decision Process. Proceedings of the International Conference on Pattern Recognition (ICPR '96), Volume IV.
Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision.
Q-learning of sequential attention for visual object recognition from informative local descriptors. Proceedings of the 22nd International Conference on Machine Learning (ICML '05).
Visual learning of affordance based cues. Proceedings of the 9th International Conference on Simulation of Adaptive Behavior: From Animals to Animats (SAB '06).
The MACS project: an approach to affordance-inspired robot control. Proceedings of the 2006 International Conference on Towards Affordance-Based Robot Control.
Recently, visual perception has been explored in the context of Gibson's concept of affordances [1] in various ways [4-9]. Extending existing functional views on visual feature representations, we focus on the importance of learning in perceptual cueing for anticipating the interaction opportunities of robotic agents. Furthermore, we propose that the originally defined representational concept for the perception of affordances - in terms of either optical flow or heuristically determined 3D features of perceptual entities - should be generalized towards arbitrary visual feature representations. In this context we demonstrate the learning of causal relationships between visual cues and associated anticipated interactions within the framework of Markov Decision Processes (MDPs). We propose a new framework for cueing and recognition of affordance-like visual entities that could play an important role in future robot control architectures. Affordance-like perception should enable systems to react to environmental stimuli both more efficiently and more autonomously, and provide the potential to plan on the basis of relevant responses to more complex perceptual configurations. We verify the concept with a concrete implementation that learns visual cues by reinforcement, applying state-of-the-art visual descriptors to regions of interest extracted from a simulated robot scenario, and show that these features were successfully selected for their relevance in predicting opportunities for robot interaction.
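The reinforcement-learning component described above can be illustrated with a minimal tabular Q-learning sketch. The cue names, the one-step episode structure, and the reward function below are illustrative assumptions, not the paper's actual setup: states stand in for detected visual cues, actions for candidate robot interactions, and reward signals whether the attempted interaction succeeded.

```python
import random

random.seed(0)

# Hypothetical toy setup (assumption, not the paper's scenario):
# states are visual cues, actions are robot interactions.
CUES = ["handle", "flat_surface"]
ACTIONS = ["grasp", "push"]
# Assumed ground truth: which interaction each cue affords.
AFFORDANCE = {"handle": "grasp", "flat_surface": "push"}

ALPHA, EPSILON = 0.2, 0.1
Q = {s: {a: 0.0 for a in ACTIONS} for s in CUES}

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

for episode in range(500):
    s = random.choice(CUES)                   # a cue observed in the scene
    a = choose_action(s)
    r = 1.0 if AFFORDANCE[s] == a else 0.0    # did the interaction succeed?
    # One-step episodes: the successor state is terminal, so the
    # Q-learning target reduces to the immediate reward r.
    Q[s][a] += ALPHA * (r - Q[s][a])

# The learned greedy policy maps each visual cue to its afforded interaction.
policy = {s: max(Q[s], key=Q[s].get) for s in CUES}
print(policy)
```

In this degenerate one-step case Q-learning reduces to learning expected immediate reward per cue-interaction pair; in a full robot scenario the successor state would carry a discounted bootstrap term, letting cue values propagate through multi-step interaction sequences.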