Training and Application of a Visual Forward Model for a Robot Camera Head

  • Authors:
  • Wolfram Schenck; Ralf Möller

  • Affiliations:
  • Computer Engineering Group, Faculty of Technology, Bielefeld University, Bielefeld, Germany (both authors)

  • Venue:
  • Anticipatory Behavior in Adaptive Learning Systems
  • Year:
  • 2007


Abstract

Visual forward models predict future visual data from the previous visual sensory state and a motor command. The adaptive acquisition of visual forward models in robotic applications is hampered by the high dimensionality of visual data, which most machine-learning and neural-network algorithms do not handle well. Moreover, the forward model has to learn which parts of the visual output are actually predictable and which are not, because they lack any corresponding part in the visual input. In the present study, a learning algorithm is proposed that solves both problems. Instead of directly forecasting visual data, it predicts the mapping between pixel positions in the visual input and output. This mapping is learned by matching corresponding regions in the visual input and output while the robot explores different visual surroundings; unpredictable regions are detected by the lack of any clear correspondence. The proposed algorithm is applied successfully to a robot camera head whose camera images are additionally distorted by a retinal mapping. Two future applications of the final visual forward model are proposed: saccade learning and a task from the domain of eye-hand coordination.
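The core idea of matching corresponding regions between visual input and output, and flagging regions without a clear correspondence as unpredictable, can be illustrated with a small sketch. This is not the paper's implementation: the block-matching scheme, normalized cross-correlation score, and the values of `block`, `search`, and `thresh` are illustrative assumptions.

```python
import numpy as np

def match_regions(img_in, img_out, block=8, search=4, thresh=0.9):
    """For each block of the output image, search a local window of the
    input image for the best-matching block (normalized cross-correlation).

    Returns a per-block displacement field (dy, dx) and a predictability
    mask: True where a clear correspondence (score >= thresh) was found.
    All parameter values are illustrative, not taken from the paper."""
    H, W = img_out.shape
    ny, nx = H // block, W // block
    disp = np.zeros((ny, nx, 2), dtype=int)
    predictable = np.zeros((ny, nx), dtype=bool)
    for by in range(ny):
        for bx in range(nx):
            y0, x0 = by * block, bx * block
            patch = img_out[y0:y0 + block, x0:x0 + block].astype(float)
            p = patch - patch.mean()
            best, best_d = -1.0, (0, 0)
            # exhaustive search over displacements within the window
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ys, xs = y0 + dy, x0 + dx
                    if ys < 0 or xs < 0 or ys + block > H or xs + block > W:
                        continue
                    cand = img_in[ys:ys + block, xs:xs + block].astype(float)
                    c = cand - cand.mean()
                    denom = np.sqrt((p * p).sum() * (c * c).sum())
                    score = (p * c).sum() / denom if denom > 0 else 0.0
                    if score > best:
                        best, best_d = score, (dy, dx)
            disp[by, bx] = best_d
            predictable[by, bx] = best >= thresh
    return disp, predictable

# Toy check: an output image that is a pure shift of the input should
# yield the inverse shift as the displacement of interior blocks.
rng = np.random.default_rng(0)
img_in = rng.random((32, 32))
img_out = np.roll(img_in, shift=(2, 3), axis=(0, 1))
disp, pred = match_regions(img_in, img_out)
```

In the toy check, interior blocks recover the displacement (-2, -3) with a correlation of 1.0, while blocks affected by the wrap-around may fall below the threshold and would be marked unpredictable, mirroring the paper's idea that unpredictability shows up as the absence of a clear match.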