C4.5: Programs for Machine Learning.
Function-based generic recognition for multiple object categories. CVGIP: Image Understanding.
Recognition by functional parts. Computer Vision and Image Understanding (special issue on function-based vision).
Interactive recognition and representation of functionality. Computer Vision and Image Understanding (special issue on function-based vision).
Modeling parietal-premotor interactions in primate control of grasping. Neural Networks (special issue on neural control and robotics: biology and technology).
Learning prospective pick and place behavior. Proceedings of the 2nd International Conference on Development and Learning (ICDL '02).
Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision.
To afford or not to afford: a new formalization of affordances toward affordance-based robot control. Adaptive Behavior.
Perception and developmental learning of affordances in autonomous robots. Proceedings of the 30th Annual German Conference on Advances in Artificial Intelligence (KI '07).
A strategy for grasping unknown objects based on co-planarity and colour information. Robotics and Autonomous Systems.
Reinforcement learning of predictive features in affordance perception. Proceedings of the 2006 International Conference on Towards Affordance-Based Robot Control.
The MACS project: an approach to affordance-inspired robot control. Proceedings of the 2006 International Conference on Towards Affordance-Based Robot Control.
Predicting affordances from gist. Proceedings of the 11th International Conference on Simulation of Adaptive Behavior: From Animals to Animats (SAB '10).
Goal emulation and planning in perceptual space using learned affordances. Robotics and Autonomous Systems.
This work concerns the relevance of Gibson's concept of affordances [1] for visual perception in interactive and autonomous robotic systems. Extending existing functional views on visual feature representations, we identify the importance of learning in perceptual cueing for anticipating opportunities for interaction by robotic agents. We investigate how the originally defined representational concept for the perception of affordances (in terms of either optical flow or heuristically determined 3D features of perceptual entities) should be generalized to arbitrary visual feature representations. In this context we demonstrate the learning of causal relationships between visual cues and predictable interactions, using both 3D and 2D information. In addition, we propose a new framework for cueing and recognizing affordance-like visual entities that could play an important role in future robot control architectures. We argue that affordance-like perception should enable systems to react to environmental stimuli both more efficiently and more autonomously, and that it offers the potential to plan on the basis of responses to more complex perceptual configurations. We verify the concept with a concrete implementation that applies state-of-the-art visual descriptors to regions of interest extracted from a simulated robot scenario, and we show that these features were successfully selected for their relevance in predicting opportunities for robot interaction.
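The learning step described in the abstract, relating visual cues from regions of interest to the outcome of an attempted interaction, can be read as a supervised prediction problem. The following Python sketch is not the paper's implementation: it assumes per-region feature vectors (standing in for pooled SIFT descriptors or 3D region statistics, extracted upstream) and uses synthetic labels in place of recorded interaction outcomes such as grasp success. An L1-regularized classifier is chosen here so that the predictive cues are also selected, loosely mirroring the feature-relevance selection reported above.

```python
# Minimal sketch (not the paper's method): predict an interaction
# outcome (e.g., grasp success) from per-region visual feature vectors.
# Feature extraction (SIFT pooling, 3D statistics) is assumed to have
# happened upstream; synthetic data stands in for the real scenario.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical dataset: 500 regions of interest, 32-dim descriptors.
n_samples, n_features = 500, 32
X = rng.normal(size=(n_samples, n_features))

# Synthetic ground truth: the interaction succeeds when a small subset
# of feature dimensions exceeds a threshold -- mimicking the idea that
# only some visual cues are predictive of the affordance.
relevant = [3, 7, 11]
y = (X[:, relevant].sum(axis=1) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# L1 regularization drives irrelevant weights to zero, so the trained
# model both predicts the outcome and identifies the cues that matter.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("features with non-zero weight:", np.flatnonzero(clf.coef_[0]))
```

On this synthetic data, the non-zero weights should concentrate on the three "relevant" dimensions, which is the sense in which sparse supervised learning can double as cue selection for affordance prediction.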