Application of Lie Algebras to Visual Servoing

  • Authors:
  • Tom Drummond; Roberto Cipolla

  • Affiliations:
  • Department of Engineering, University of Cambridge, Trumpington St, Cambridge, CB2 1PZ (twd20@eng.cam.ac.uk; cipolla@eng.cam.ac.uk)

  • Venue:
  • International Journal of Computer Vision - Special issue on image-based servoing
  • Year:
  • 2000

Abstract

A novel approach to visual servoing is presented, which takes advantage of the structure of the Lie algebra of affine transformations. The aim of this project is to use feedback from a visual sensor to guide a robot arm to a target position. The target position is learned using the principle of ‘teaching by showing’, in which the supervisor places the robot in the correct target position and the system captures the information needed to return to that position. The sensor is placed in the end effector of the robot, the ‘camera-in-hand’ approach, and thus provides direct feedback of the robot motion relative to the target scene via observed transformations of the scene. These scene transformations are obtained by measuring the affine deformations of a target planar contour (under the weak perspective assumption), tracked using an active contour, or snake. Deformations of the snake are constrained using the Lie groups of affine and projective transformations. Properties of the Lie algebra of affine transformations are exploited to provide a novel method for integrating observed deformations of the target contour. These deformations can then be compensated for by appropriate robot motion using a non-linear control structure. The local differential representation of contour deformations is extended to allow accurate integration of an extended series of small perturbations. This differs from existing approaches by virtue of the properties of the Lie algebra representation, which implicitly embeds knowledge of the three-dimensional world within a two-dimensional image-based system. These techniques have been implemented using a video camera to control a 5 DoF robot arm. Experiments with this implementation are presented, together with a discussion of the results.
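To illustrate the idea of integrating small observed deformations via the Lie algebra, the sketch below accumulates affine perturbations expressed as coefficients on a basis of generators of the 2D affine group, composing their exponentials into a single group element. This is only a minimal illustration of the general exponential-map mechanism, not the paper's implementation; the generator ordering, function names, and coefficient format are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import expm, logm

# Generators of the Lie algebra of 2D affine transformations, written as
# 3x3 matrices acting on homogeneous image coordinates. The ordering
# (translations, rotation, scale, aspect, shear) is an illustrative choice.
G = [
    np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], float),   # x translation
    np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], float),   # y translation
    np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], float),  # rotation
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]], float),   # isotropic scale
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], float),  # aspect change
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float),   # shear
]

def integrate_deformations(coeff_seq):
    """Compose a sequence of small deformations, each given as a tuple of
    coefficients on the generators, into one accumulated affine transform.
    Each step is mapped from the algebra to the group by the matrix
    exponential, then composed on the right."""
    T = np.eye(3)
    for alphas in coeff_seq:
        step = sum(a * Gi for a, Gi in zip(alphas, G))
        T = T @ expm(step)
    return T

# Ten small rotations of 0.01 rad each: along a single generator the
# exponentials compose exactly, since exp(aG) exp(bG) = exp((a + b)G).
steps = [(0.0, 0.0, 0.01, 0.0, 0.0, 0.0)] * 10
T = integrate_deformations(steps)
total = logm(T).real  # pull the accumulated transform back to the algebra
```

The matrix logarithm at the end recovers the accumulated algebra element, which is how a differential (image-based) representation can summarise an extended series of small perturbations as a single finite transformation.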