Calibrating a Cartesian Robot with Eye-on-Hand Configuration Independent of Eye-to-Hand Relationship

  • Authors:
  • R. K. Lenz; R. Y. Tsai

  • Affiliations:
  • Technische Univ. München, München, W. Germany; IBM Thomas J. Watson Research Center, Yorktown Heights, NY

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 1989

Abstract

A new approach is described for geometric calibration of Cartesian robots. It is part of a set of procedures for real-time 3-D robotics eye, eye-to-hand, and hand calibration which uses a common setup and calibration object, common coordinate systems, matrices, vectors, symbols, and operations, and is especially suited to machine vision systems. The robot makes a series of automatically planned movements with a camera rigidly mounted at the gripper. At the end of each move, it takes a total of 90 ms to grab an image, extract image feature coordinates, and perform camera-extrinsic calibration. After the robot finishes all the movements, it takes only a few milliseconds to do the calibration. The key to this technique is that only one rotary joint moves during each movement. This allows the calibration parameters to be fully decoupled and converts a multidimensional problem into a series of one-dimensional problems. Another key is that the eye-to-hand transformation is not needed at all during the computation.
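To make the decoupling idea concrete, the following is a minimal sketch (not the authors' procedure) in Python/NumPy. It assumes that camera-extrinsic calibration at each stop yields a world-to-camera pose (R_i, t_i) relative to the calibration object, and that exactly one joint moves between consecutive poses. Under those assumptions, a joint's axis direction can be estimated from relative camera motion alone, so the fixed eye-to-hand transform never enters the computation; the function names and tolerance are illustrative.

```python
import numpy as np

def rotation_axis(R_rel):
    """Unit rotation axis of a rotation matrix, taken from its
    antisymmetric part (valid away from 0 and 180 degrees)."""
    v = np.array([R_rel[2, 1] - R_rel[1, 2],
                  R_rel[0, 2] - R_rel[2, 0],
                  R_rel[1, 0] - R_rel[0, 1]])
    return v / np.linalg.norm(v)

def estimate_joint_axis(extrinsics):
    """Estimate one joint's axis direction (in the calibration-object
    frame) from a list of camera extrinsics [(R_i, t_i), ...] recorded
    while ONLY that joint moved between consecutive poses.

    - Rotary joint: the relative rotation of the camera in the world
      frame shares the joint's rotation axis.
    - Prismatic (Cartesian) axis: the camera-centre displacement in the
      world frame lies along the axis.
    """
    axes = []
    for (R0, t0), (R1, t1) in zip(extrinsics, extrinsics[1:]):
        R_rel = R1.T @ R0                      # relative rotation in world frame
        if np.linalg.norm(R_rel - np.eye(3)) > 1e-6:   # rotary move
            axes.append(rotation_axis(R_rel))
        else:                                           # prismatic move
            c0, c1 = -R0.T @ t0, -R1.T @ t1    # camera centres in world frame
            d = c1 - c0
            axes.append(d / np.linalg.norm(d))
    # Sign-align the per-move estimates to the first one and average.
    ref = axes[0]
    aligned = [a if a @ ref >= 0 else -a for a in axes]
    mean = np.mean(aligned, axis=0)
    return mean / np.linalg.norm(mean)
```

Because each batch of poses exercises a single joint, each call to a routine like this solves an independent low-dimensional problem, which is the sense in which the multidimensional calibration is reduced to a series of one-dimensional ones.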