Learning visual representations for perception-action systems

  • Authors:
  • Justus Piater; Sébastien Jodogne; Renaud Detry; Dirk Kraft; Norbert Krüger; Oliver Kroemer; Jan Peters

  • Affiliations:
  • Department of Electrical Engineering and Computer Science, Grande Traverse, University of Liège, Liège - Sart Tilman, Belgium; EURESYS s.a., Angleur, Belgium; Department of Electrical Engineering and Computer Science, Grande Traverse, University of Liège, Liège - Sart Tilman, Belgium; The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Odense M, Denmark; The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Odense M, Denmark; Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany

  • Venue:
  • International Journal of Robotics Research
  • Year:
  • 2011


Abstract

We discuss vision as a sensory modality for systems that interact flexibly with uncontrolled environments. Instead of trying to build a generic vision system that produces task-independent representations, we argue in favor of task-specific, learnable representations. This concept is illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results in an adaptive discretization of the perceptual space based on the presence or absence of visual features. Its extension, RLJC, additionally handles continuous action spaces. In contrast to the minimalistic visual representations produced by RLVC and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. With these models, the method associates grasp experiences that are learned autonomously by trial and error. These experiences form a non-parametric representation of grasp success likelihoods over gripper poses, which we call a grasp density. Thus, object detection in a novel scene simultaneously produces suitable grasping options.
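
The two sketches below illustrate, in simplified form, the ideas summarized in the abstract. They are not the paper's implementations: all class names, data structures, splitting criteria, kernels, and bandwidths are assumptions made for illustration only.

First, a minimal sketch of RLVC-style adaptive discretization, assuming a percept is reduced to the set of visual-feature ids detected in an image. Each discrete state is a conjunction of presence/absence tests on such features, and an aliased state is refined by splitting it on one additional feature.

```python
class AdaptiveDiscretization:
    """Illustrative sketch of adaptive discretization of a perceptual space.

    A percept is modelled as the set of visual-feature ids detected in an
    image.  Each discrete state is a conjunction of presence/absence tests;
    splitting an aliased state on a new feature refines the discretization.
    (Hypothetical data structures, not the paper's exact algorithm.)
    """

    def __init__(self):
        # Each state is a frozenset of (feature_id, must_be_present) tests.
        self.states = [frozenset()]  # start with a single, untested state

    def classify(self, percept_features):
        """Map a percept (a set of detected feature ids) to a discrete state."""
        for tests in self.states:
            if all((fid in percept_features) == present for fid, present in tests):
                return tests
        return None  # unreachable while the states partition the percept space

    def split(self, state, feature_id):
        """Refine an aliased state by testing one additional visual feature."""
        self.states.remove(state)
        self.states.append(state | {(feature_id, True)})
        self.states.append(state | {(feature_id, False)})


disc = AdaptiveDiscretization()
state = disc.classify({3, 7})
disc.split(state, 7)            # suppose aliasing was detected for feature 7
print(disc.classify({3, 7}))    # percepts with feature 7 now get their own state
print(disc.classify({3}))
```

Second, a minimal sketch of the grasp-density idea: each successful grasp gathered by trial and error contributes a kernel centered on its gripper pose, yielding a non-parametric estimate of grasp success likelihood. Here positions use an isotropic Gaussian kernel and orientations (unit quaternions) a kernel on the inter-quaternion angle; the kernel choices and bandwidths are illustrative.

```python
import numpy as np


class GraspDensity:
    """Illustrative non-parametric model of grasp success over gripper poses."""

    def __init__(self, pos_bandwidth=0.02, ori_bandwidth=0.2):
        self.pos_bw = pos_bandwidth   # position bandwidth (metres), assumed value
        self.ori_bw = ori_bandwidth   # orientation bandwidth (radians), assumed value
        self.samples = []             # list of (position, unit quaternion) pairs

    def add_successful_grasp(self, position, quaternion):
        """Record one grasp that succeeded during trial-and-error learning."""
        q = np.asarray(quaternion, dtype=float)
        self.samples.append((np.asarray(position, dtype=float), q / np.linalg.norm(q)))

    def evaluate(self, position, quaternion):
        """Return an (unnormalised) grasp success likelihood at a query pose."""
        if not self.samples:
            return 0.0
        p = np.asarray(position, dtype=float)
        q = np.asarray(quaternion, dtype=float)
        q = q / np.linalg.norm(q)
        score = 0.0
        for sp, sq in self.samples:
            # Gaussian kernel on the distance between grasp positions.
            k_pos = np.exp(-0.5 * np.sum((p - sp) ** 2) / self.pos_bw ** 2)
            # Kernel on the angle between orientations (|dot| handles q / -q).
            angle = np.arccos(np.clip(abs(np.dot(q, sq)), -1.0, 1.0))
            k_ori = np.exp(-0.5 * (angle / self.ori_bw) ** 2)
            score += k_pos * k_ori
        return score / len(self.samples)


# Toy usage: two recorded grasps, query near the first one.
density = GraspDensity()
density.add_successful_grasp([0.10, 0.00, 0.30], [0, 0, 0, 1])
density.add_successful_grasp([0.12, 0.01, 0.31], [0, 0, 0.07, 0.998])
print(density.evaluate([0.105, 0.0, 0.30], [0, 0, 0, 1]))
```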