Goal emulation and planning in perceptual space using learned affordances

  • Authors:
  • Emre Ugur, Erhan Oztop, Erol Sahin

  • Affiliations:
  • Emre Ugur: NICT, Biological ICT Group, Kyoto, Japan; ATR, Cognitive Mechanisms Labs., Kyoto, Japan; Middle East Technical University, Department of Computer Engineering, KOVAN Research Lab., Ankara, Turkey
  • Erhan Oztop: NICT, Biological ICT Group, Kyoto, Japan; ATR, Cognitive Mechanisms Labs., Kyoto, Japan; Osaka University, School of Engineering Science, Osaka, Japan
  • Erol Sahin: Middle East Technical University, Department of Computer Engineering, KOVAN Research Lab., Ankara, Turkey

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2011

Abstract

In this paper, we show that through self-interaction and self-observation, an anthropomorphic robot equipped with a range camera can learn object affordances and use this knowledge for planning. In the first step of learning, the robot discovers commonalities in its action-effect experiences by discovering effect categories. Once the effect categories are discovered, affordance predictors for each behavior are obtained in the second step by learning the mapping from object features to effect categories. After learning, the robot can make plans to achieve desired goals, emulate the end states of demonstrated actions, monitor plan execution, and take corrective actions using the perceptual structures employed or discovered during learning. We argue that the proposed learning system shares crucial elements with the development of 7-10-month-old infants, who explore the environment and learn the dynamics of objects through goal-free exploration. In addition, we discuss goal emulation and planning in relation to older infants with no symbolic inference capability, and to non-linguistic animals that use object affordances to make action plans.
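The two learning steps in the abstract can be sketched in miniature: cluster raw action effects into effect categories, then learn a predictor from object features to those categories, and finally use the predictor for goal emulation. Everything below is an illustrative assumption, not the authors' implementation: the toy "push" experiences, the single roundness feature, the tiny 1-D k-means, and the nearest-class-mean predictor are all stand-ins for the paper's range-camera features and learned affordance predictors.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    """Component-wise mean of a list of tuples."""
    return tuple(sum(xs) / len(points) for xs in zip(*points))

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means; enough for clustering low-dimensional effect vectors."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            buckets[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        centroids = [mean(b) if b else centroids[i] for i, b in enumerate(buckets)]
    return centroids

# Hypothetical self-observation data for one behavior ("push"):
# (object features: roundness,) -> (observed effect: displacement,)
experiences = [((0.90,), (1.00,)), ((0.80,), (0.90,)), ((0.95,), (1.10,)),
               ((0.10,), (0.05,)), ((0.20,), (0.08,)), ((0.15,), (0.02,))]
features = [f for f, _ in experiences]
effects = [e for _, e in experiences]

# Step 1: discover effect categories by clustering the raw effects.
effect_centroids = kmeans(effects, k=2)
labels = [min(range(2), key=lambda i: dist2(e, effect_centroids[i]))
          for e in effects]

# Step 2: learn a feature -> effect-category predictor (nearest class mean).
class_means = [mean([f for f, l in zip(features, labels) if l == c])
               for c in range(2)]

def predict(obj_features):
    """Predicted effect category of applying "push" to this object."""
    return min(range(2), key=lambda c: dist2(obj_features, class_means[c]))

# Goal emulation: given a demonstrated end state (a large displacement),
# find its effect category and check which novel objects afford it.
goal = (1.0,)
goal_cat = min(range(2), key=lambda c: dist2(goal, effect_centroids[c]))
# A round novel object (roundness 0.85) is predicted to roll far and so
# affords the goal; a flat one (roundness 0.12) does not.
```

The design choice mirrors the abstract's two-stage structure: because categories are discovered from effects rather than predefined, the same machinery extends to new behaviors without hand-labeled outcomes.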