Multisensory visual servoing by a neural network

  • Authors:
  • Guo-Qing Wei; G. Hirzinger

  • Affiliations:
  • Siemens Corp. Res. Inc., Princeton, NJ

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 1999

Abstract

Conventional computer vision methods for determining a robot's end-effector motion from sensory data require sensor calibration (e.g., camera calibration) and sensor-to-hand calibration (e.g., hand-eye calibration). These calibrations involve considerable computation and can be difficult, especially when different kinds of sensors are involved. In this correspondence, we present a neural network approach to the motion determination problem that requires no calibration. Two kinds of sensory data, namely camera images and laser range data, are used as input to a multilayer feedforward network that learns the direct transformation from the sensory data to the required motion. This provides a practical sensor fusion method. Using a recursive motion strategy and a network correction, we relax the requirement that the learned transformation be exact. Another important feature of our work is that the goal position can be changed without retraining the network. Experimental results show the effectiveness of our method.
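The sketch below illustrates one plausible reading of the approach, not the authors' implementation: a feedforward network maps a fused sensor-error vector (camera image features concatenated with laser range readings, relative to the goal readings) to an end-effector motion increment, and the motion is applied recursively with a fractional gain so an imperfectly learned mapping can still converge. All dimensions, the error-based input encoding, and the callbacks read_sensors and move_robot are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IMG = 8     # assumed: image coordinates of four tracked features
N_RANGE = 4   # assumed: four laser range readings
N_IN = N_IMG + N_RANGE
N_HID = 20
N_OUT = 6     # assumed: 6-DOF motion increment (dx, dy, dz, rx, ry, rz)

# Randomly initialized weights stand in for a trained network.
W1 = rng.normal(0.0, 0.1, (N_HID, N_IN))
b1 = np.zeros(N_HID)
W2 = rng.normal(0.0, 0.1, (N_OUT, N_HID))
b2 = np.zeros(N_OUT)

def predict_motion(sensor_error):
    """Forward pass: fused sensory error -> motion command."""
    h = np.tanh(W1 @ sensor_error + b1)
    return W2 @ h + b2

def servo(read_sensors, move_robot, goal_reading,
          gain=0.5, tol=1e-3, max_steps=100):
    """Recursive servoing loop: re-sense and correct at every step.

    Because the input is the difference between goal and current
    readings, changing the goal only means supplying new goal
    readings, not retraining the network.
    """
    for _ in range(max_steps):
        error = goal_reading - read_sensors()
        if np.linalg.norm(error) < tol:
            break
        # Execute only a fraction of the predicted motion so that an
        # inexactly learned transformation is corrected iteratively.
        move_robot(gain * predict_motion(error))

# Hypothetical usage with stub sensor and robot callbacks:
goal = np.zeros(N_IN)
servo(lambda: rng.normal(0.0, 1.0, N_IN), lambda m: None, goal, max_steps=3)
```

The fractional gain reflects the paper's recursive motion strategy: rather than trusting one network prediction to reach the goal, each step moves partway, re-senses, and lets the closed loop absorb the approximation error of the learned mapping.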