Complementary data fusion in vision-guide and control of robotic tracking

  • Authors:
  • Y. M. Chen; C. S. Hsueh

  • Affiliations:
  • Department of Electrical Engineering, Lee Ming Institute of Technology, Taipei 243, Taiwan (R.O.C.), ymchen@ccit.cc04.edu.tw; Chung Shan Institute of Science & Technology (Taiwan), g881707@cc04.ccit.edu.tw

  • Venue:
  • Robotica
  • Year:
  • 2001

Abstract

We present a data fusion control scheme for the hand-held camera of the SCORBOT-ER VII robot arm for learning visual tracking and interception. The control scheme consists of two modules: the first generates candidate actions to drive the end-effector as accurately as possible directly above a moving target, so that the second module can take over to intercept it. The desired camera-to-joint coordinate mappings for the tracking module are generalized by Elman neural networks. The intercept module then determines a suitable intercept trajectory for the robot that satisfies the required conditions. Simulation results support the claim that the scheme can successfully track and intercept a moving target.
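
To make the tracking module concrete, below is a minimal sketch of an Elman (simple recurrent) network learning a mapping from camera image-plane coordinates to joint-space coordinates. The input/output dimensions, learning rate, and the plain-backprop training rule (treating the context units as fixed extra inputs, as in Elman's original formulation rather than backpropagation through time) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class ElmanNet:
    """Minimal Elman network sketch: hypothetical 2-D image-plane input,
    5-D joint-space output; all sizes are assumptions for illustration."""

    def __init__(self, n_in=2, n_hidden=10, n_out=5, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.3, (n_hidden, n_in))      # input -> hidden
        self.W_ctx = rng.normal(0.0, 0.3, (n_hidden, n_hidden))  # context -> hidden
        self.W_out = rng.normal(0.0, 0.3, (n_out, n_hidden))     # hidden -> output
        self.context = np.zeros(n_hidden)  # copy of previous hidden activations
        self.lr = lr

    def forward(self, x):
        # Hidden state depends on the current input and the previous
        # hidden state fed back through the context units.
        self.h = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
        y = self.W_out @ self.h
        self.context = self.h.copy()  # Elman feedback: context := hidden
        return y

    def train_step(self, x, target):
        # Classic Elman training: one backprop step with the context
        # units treated as fixed inputs (no backprop through time).
        ctx_prev = self.context.copy()
        y = self.forward(x)
        err = y - target                               # output-layer error
        dh = (self.W_out.T @ err) * (1.0 - self.h**2)  # tanh derivative
        self.W_out -= self.lr * np.outer(err, self.h)
        self.W_in -= self.lr * np.outer(dh, x)
        self.W_ctx -= self.lr * np.outer(dh, ctx_prev)
        return float(np.mean(err**2))


# Hypothetical usage: fit a toy camera-to-joint mapping along a trajectory.
net = ElmanNet()
for step in range(1000):
    t = 0.01 * step
    pixel = np.array([np.cos(t), np.sin(t)])  # fake target position in the image
    joints = np.full(5, (t % 1.0))            # fake reference joint coordinates
    mse = net.train_step(pixel, joints)
```

The feedback from the hidden layer to the context units is what lets the network exploit the target's motion history when generalizing the camera-to-joint mapping, which a purely feedforward network could not do from a single frame.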