Visual learning of affordance based cues

  • Authors:
  • Gerald Fritz;Lucas Paletta;Manish Kumar;Georg Dorffner;Ralph Breithaupt;Erich Rome

  • Affiliations:
  • Gerald Fritz, Lucas Paletta, Manish Kumar: Institute of Digital Image Processing, Computational Perception Group, JOANNEUM RESEARCH Forschungsgesellschaft mbH, Graz, Austria
  • Georg Dorffner: Österreichische Studiengesellschaft für Kybernetik, Neural Computation and Robotics, Vienna, Austria
  • Ralph Breithaupt, Erich Rome: Fraunhofer Institute for Autonomous Intelligent Systems, Robot Control Architectures, Sankt Augustin, Germany

  • Venue:
  • SAB'06: Proceedings of the 9th International Conference on Simulation of Adaptive Behavior (From Animals to Animats)
  • Year:
  • 2006


Abstract

This work is about the relevance of Gibson's concept of affordances [1] for visual perception in interactive and autonomous robotic systems. Extending existing functional views on visual feature representations, we identify the importance of learning in perceptual cueing for anticipating opportunities for interaction of robotic agents. We investigate how the originally defined representational concept for the perception of affordances (in terms of either optical flow or heuristically determined 3D features of perceptual entities) should be generalized to arbitrary visual feature representations. In this context, we demonstrate the learning of causal relationships between visual cues and predictable interactions, using both 3D and 2D information. In addition, we emphasize a new framework for cueing and recognition of affordance-like visual entities that could play an important role in future robot control architectures. We argue that affordance-like perception should enable systems to react to environmental stimuli both more efficiently and more autonomously, and provide the potential to plan on the basis of responses to more complex perceptual configurations. We verify the concept with a concrete implementation that applies state-of-the-art visual descriptors and regions of interest extracted from a simulated robot scenario, and show that these features were successfully selected for their relevance in predicting opportunities for robot interaction.
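
To make the idea of learned affordance cueing concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes that visual feature vectors for regions of interest have already been extracted (synthetic stand-ins are used here) and that each region carries a label recording the outcome of an attempted robot interaction. A generic scikit-learn classifier (RandomForestClassifier, chosen here only for illustration) then learns which feature dimensions predict the interaction outcome, mirroring the feature-selection aspect described in the abstract.

```python
# Minimal sketch (assumptions labelled above): learn a mapping from visual
# region descriptors to affordance outcomes and inspect which cues matter.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic dataset: 200 regions of interest, each described by a
# 16-dimensional visual feature vector (hypothetical descriptor values).
X = rng.normal(size=(200, 16))

# Hypothetical affordance label ("interaction succeeded" = 1). It depends
# only on two feature dimensions, so a good learner should single those
# dimensions out as the relevant affordance cues.
y = ((X[:, 3] + 0.5 * X[:, 7]) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("prediction accuracy:", clf.score(X_test, y_test))
# Feature importances indicate which visual cues carry affordance-relevant
# information about the predicted interaction.
print("most relevant feature dimensions:",
      np.argsort(clf.feature_importances_)[::-1][:2])
```

In the paper's setting the feature vectors would come from the 2D and 3D descriptors computed over regions of interest in the simulated robot scenario, and the labels from observed interaction outcomes; the sketch only shows the general shape of such a cue-to-outcome learning step.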