Reinforcement learning of predictive features in affordance perception

  • Authors:
  • Lucas Paletta; Gerald Fritz

  • Affiliations:
  • Joanneum Research Forschungsgesellschaft mbH, Institute of Digital Image Processing, Computational Perception Group, Graz, Austria (both authors)

  • Venue:
  • Proceedings of the 2006 International Conference on Towards Affordance-Based Robot Control
  • Year:
  • 2006

Abstract

Recently, visual perception has been explored in the context of Gibson's concept of affordances [1] in various ways [4-9]. Extending existing functional views on visual feature representations, we focus on the importance of learning in perceptual cueing for anticipating opportunities for interaction by robotic agents. Furthermore, we propose that the originally defined representational concept for the perception of affordances - in terms of either optical flow or heuristically determined 3D features of perceptual entities - should be generalized towards arbitrary visual feature representations. In this context we demonstrate the learning of causal relationships between visual cues and the associated anticipated interactions, using visual information within the framework of Markov Decision Processes (MDPs). We emphasize a new framework for cueing and recognition of affordance-like visual entities that could play an important role in future robot control architectures. Affordance-like perception should enable systems to react to environmental stimuli both more efficiently and more autonomously, and should provide the potential to plan on the basis of relevant responses to more complex perceptual configurations. We verify the concept with a concrete implementation of learning visual cues by reinforcement: state-of-the-art visual descriptors and regions of interest are extracted from a simulated robot scenario, and we show that these features are successfully selected for their relevance in predicting opportunities for robot interaction.
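
The abstract gives no implementation details, so the following is a minimal, hypothetical sketch of the general idea: tabular Q-learning with epsilon-greedy exploration over discretized visual cues, where the value learned for a cue reflects how well it predicts an upcoming interaction opportunity in a toy simulated scenario. The cue indices, action names, reward values, and the simulated_outcome stand-in are illustrative assumptions, not taken from the paper.

import random
from collections import defaultdict

# Hypothetical illustration (not the authors' code): tabular Q-learning over
# discretized visual cues; cues that earn a high value for "approach" are
# treated as predictive features for an interaction opportunity.
ALPHA, EPSILON = 0.1, 0.1
ACTIONS = ["approach", "ignore"]           # simplified robot responses to a cue
Q = defaultdict(float)                      # Q[(cue_id, action)] -> learned value

def simulated_outcome(cue_id, action):
    """Stand-in for the simulated robot scenario: cues 0-4 are assumed to be
    predictive of an interaction opportunity, the rest are distractors."""
    if action == "approach":
        return 1.0 if cue_id < 5 else -0.2  # reward only for predictive cues
    return 0.0

def choose_action(cue_id):
    if random.random() < EPSILON:           # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(cue_id, a)])

for episode in range(5000):
    cue = random.randrange(20)              # a detected region-of-interest cue
    action = choose_action(cue)
    reward = simulated_outcome(cue, action)
    # one-step Q-learning update; the transition is terminal, so no bootstrap term
    Q[(cue, action)] += ALPHA * (reward - Q[(cue, action)])

# Rank cues by their learned value for "approach"; the top entries act as
# features selected for their relevance in predicting interaction.
predictive = sorted(range(20), key=lambda c: -Q[(c, "approach")])[:5]
print("cues selected as predictive of interaction:", predictive)

In this toy setting the reward signal alone drives the selection: cues whose presence reliably precedes a rewarded interaction accumulate value, while distractor cues do not, which mirrors the abstract's claim that relevant features can be singled out by reinforcement rather than by hand-crafted 3D heuristics.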