Perception and Developmental Learning of Affordances in Autonomous Robots

  • Authors:
  • Lucas Paletta; Gerald Fritz; Florian Kintzler; Jörg Irran; Georg Dorffner

  • Affiliations:
  • Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Wastiangasse 6, Graz, Austria (L. Paletta, G. Fritz); Österreichisches Forschungsinstitut für Artificial Intelligence (OFAI), Neural Computation and Robotics, Freyung 6, Vienna, Austria (F. Kintzler, J. Irran, G. Dorffner)

  • Venue:
  • KI '07: Proceedings of the 30th Annual German Conference on Advances in Artificial Intelligence
  • Year:
  • 2007

Abstract

Visual perception has recently been explored in various ways in the context of Gibson's concept of affordances [1]. Extending functional views on visual feature representations, this work focuses on the importance of developmental learning and perceptual cueing for an agent's anticipation of opportunities for interaction. We present a concept for incrementally learning abstract affordances from basic ones, in relation to learning complex affordance features. We further propose that the originally defined representational concept for the perception of affordances (in terms of either motion or 3D cues) should be generalized to arbitrary visual feature representations. We demonstrate the learning of causal relations between visual cues and their associated anticipated interactions via reinforcement learning of predictive perceptual states, and we pursue a recently presented framework for cueing and recognition of affordance-based visual entities that, in analogy to human perception, plays an important role in robot control architectures. We verify the concept experimentally in a real-world robot scenario by learning predictive visual cues from reinforcement signals, showing that features were selected for their relevance in predicting opportunities for interaction.
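
For illustration only, the following is a minimal sketch (not the authors' implementation) of the idea described in the abstract: using reinforcement signals from attempted interactions to estimate which visual cues predict opportunities for interaction. The cue names, affordance probabilities, epsilon-greedy probing, and incremental value update are all illustrative assumptions.

import random

# Hypothetical visual cues an agent might extract from a scene.
CUES = ["red_blob", "elongated_edge", "circular_region", "textured_patch"]

# Hypothetical ground truth: the probability that an interaction (e.g.,
# lifting) succeeds when the cue is present. The agent does not know this
# and must discover it from reinforcement signals.
AFFORDANCE_PROB = {"red_blob": 0.9, "elongated_edge": 0.1,
                   "circular_region": 0.7, "textured_patch": 0.2}

def attempt_interaction(cue):
    """Simulate trying the interaction on an object showing this cue."""
    return 1.0 if random.random() < AFFORDANCE_PROB[cue] else 0.0

def learn_predictive_cues(episodes=2000, alpha=0.1, epsilon=0.1):
    """Estimate each cue's predictive value with epsilon-greedy probing."""
    value = {c: 0.0 for c in CUES}
    for _ in range(episodes):
        # Mostly exploit the currently most predictive cue; sometimes explore.
        if random.random() < epsilon:
            cue = random.choice(CUES)
        else:
            cue = max(CUES, key=value.get)
        reward = attempt_interaction(cue)
        # Incremental update of the cue's estimated predictive value.
        value[cue] += alpha * (reward - value[cue])
    return value

if __name__ == "__main__":
    estimates = learn_predictive_cues()
    for cue, v in sorted(estimates.items(), key=lambda kv: -kv[1]):
        print(f"{cue:16s} predictive value ~ {v:.2f}")

Running the sketch ranks "red_blob" and "circular_region" above the other cues, which is the bandit-style analogue of selecting visual features for their relevance in predicting interaction outcomes; the paper's setting, by contrast, involves a real robot and richer perceptual state representations.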