Technical Note: Q-Learning. Machine Learning.
C4.5: Programs for Machine Learning.
Function-based generic recognition for multiple object categories. CVGIP: Image Understanding.
Recognition by functional parts. Computer Vision and Image Understanding, special issue on function-based vision.
Interactive recognition and representation of functionality. Computer Vision and Image Understanding, special issue on function-based vision.
Modeling parietal-premotor interactions in primate control of grasping. Neural Networks, special issue on neural control and robotics: biology and technology.
Markov Decision Processes: Discrete Stochastic Dynamic Programming.
Learning Prospective Pick and Place Behavior. ICDL '02: Proceedings of the 2nd International Conference on Development and Learning.
Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision.
Q-learning of sequential attention for visual object recognition from informative local descriptors. ICML '05: Proceedings of the 22nd International Conference on Machine Learning.
Visual learning of affordance based cues. SAB '06: Proceedings of the 9th International Conference on Simulation of Adaptive Behavior (From Animals to Animats).
Reinforcement learning in robotics: A survey. International Journal of Robotics Research.
Synergy-based affordance learning for robotic grasping. Robotics and Autonomous Systems.
Visual perception has recently been explored in various ways in the context of Gibson's concept of affordances [1]. This work focuses on developmental learning and perceptual cueing for an agent's anticipation of opportunities for interaction, extending functional views on visual feature representations. We present a concept for incrementally learning abstract affordances from basic ones, in relation to learning complex affordance features. We further propose that the original representational concept for affordance perception, which relies on either motion or 3D cues, be generalized to arbitrary visual feature representations. We demonstrate the learning of causal relations between visual cues and the associated anticipated interactions through reinforcement learning of predictive perceptual states. We build on a recently presented framework for the cueing and recognition of affordance-based visual entities, which plays an important role in robot control architectures, in analogy to human perception. We verify the concept experimentally in a real-world robot scenario, learning predictive visual cues from reinforcement signals and showing that features are selected for their relevance in predicting opportunities for interaction.
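To make the learning mechanism concrete, the following is a minimal Python sketch of tabular Q-learning over discretized visual-cue states, in the spirit of the Q-learning and sequential-attention references above. It is an illustrative assumption, not the authors' implementation: states are assumed to be indices of quantized local descriptors (e.g., entries of a SIFT codebook), actions are candidate interactions, and the reward signals whether the anticipated interaction succeeded.

# Minimal sketch (hypothetical, not the paper's code): tabular Q-learning
# over discretized visual-cue states. A state is assumed to be a codebook
# index of a quantized local descriptor; an action is a candidate
# interaction; reward is 1.0 when the anticipated interaction succeeds.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state, actions):
    """Epsilon-greedy selection over the available interactions."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    """One-step Q-learning update: Q += alpha * (r + gamma * max Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example step: cue state 12 (a codebook index), three candidate interactions.
actions = [0, 1, 2]
a = choose_action(12, actions)
update(12, a, reward=1.0, next_state=7, actions=actions)

Under this reading, features that reliably predict successful interactions accumulate higher Q-values, which matches the paper's claim that visual cues are selected for their relevance in predicting opportunities for interaction.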