A general learning approach to visually guided 3D-positioning and pose control of robot arms

  • Authors:
  • Jianwei Zhang; Alois Knoll

  • Affiliations:
  • Faculty of Technology, University of Bielefeld, 33501 Bielefeld, Germany (both authors)

  • Venue:
  • Biologically inspired robot behavior engineering
  • Year:
  • 2003


Abstract

We describe a general learning approach to fine-positioning of a robot gripper in a three-dimensional workspace using visual sensor data. The approach has two steps: (a) a hybrid representation for encoding the robot state perceived by visual sensors, and (b) partitioning the robot's action space so that multiple specialized controllers can evolve.

The input encoding represents position by geometric features and uniquely describes orientation by a combination of principal components. Such a dimension-reduction procedure is essential for applying both supervised and reinforcement learning. A fuzzy controller based on B-spline models serves as a function approximator, taking this encoded input and producing the motion parameters as outputs.

A complex positioning and pose-control task is divided into consecutive sub-tasks, each solved by a specialized self-learning controller. The approach has been successfully applied to controlling 6-axis robots that translate the gripper in the three-dimensional workspace and rotate it about the z-axis. Instead of undergoing a cumbersome hand-eye calibration process, our system lets the controllers evolve using systematic perturbation motions around the desired position and orientation.
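The abstract gives no implementation details, but the core idea of a B-spline controller used as a function approximator can be illustrated. The Python sketch below is a minimal, hypothetical one-input/one-output version: the basis-function count, spline degree, input range, and the toy perturbation data are assumptions made for illustration, not the authors' actual setup.

```python
import numpy as np

def bspline_basis(x, knots, degree, i):
    """Cox-de Boor recursion: value of the i-th B-spline basis of the given degree at x."""
    if degree == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left_den = knots[i + degree] - knots[i]
    right_den = knots[i + degree + 1] - knots[i + 1]
    left = 0.0 if left_den == 0 else (x - knots[i]) / left_den * bspline_basis(x, knots, degree - 1, i)
    right = 0.0 if right_den == 0 else (knots[i + degree + 1] - x) / right_den * bspline_basis(x, knots, degree - 1, i + 1)
    return left + right

class BSplineController:
    """One-input, one-output approximator: the output is a weighted sum of B-spline
    basis functions, with the control points (de Boor points) as learnable parameters."""
    def __init__(self, n_basis=8, degree=2, x_range=(-1.0, 1.0)):
        lo, hi = x_range
        # Clamped knot vector so the basis covers the whole input range.
        inner = np.linspace(lo, hi, n_basis - degree + 1)
        self.knots = np.concatenate([[lo] * degree, inner, [hi] * degree])
        self.degree = degree
        self.n_basis = n_basis
        self.weights = np.zeros(n_basis)   # control points, initially zero

    def features(self, x):
        # Keep x just inside the half-open last knot interval so the basis stays nonzero at the upper bound.
        x = min(x, self.knots[-1] - 1e-9)
        return np.array([bspline_basis(x, self.knots, self.degree, i)
                         for i in range(self.n_basis)])

    def predict(self, x):
        return self.features(x) @ self.weights

    def fit(self, xs, ys):
        """Least-squares fit of the control points to (encoded state, motion command) samples."""
        Phi = np.vstack([self.features(x) for x in xs])
        self.weights, *_ = np.linalg.lstsq(Phi, np.asarray(ys), rcond=None)

# Toy usage: learn a corrective motion from perturbation samples around a set point.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 200)              # hypothetical encoded image-feature error
ys = -0.5 * xs + 0.1 * np.sin(3 * xs)     # hypothetical motion command to learn
ctrl = BSplineController()
ctrl.fit(xs, ys)
print(ctrl.predict(0.3))
```

Because the output is linear in the control points, the fit here reduces to ordinary least squares; a multi-input controller of this kind would typically use tensor-product B-spline bases and incremental updates, an extension omitted from this sketch.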