Visual control through the trifocal tensor for nonholonomic robots

  • Authors:
  • G. López-Nicolás; J. J. Guerrero; C. Sagüés

  • Affiliation:
  • DIIS - I3A, Universidad de Zaragoza, C/ María de Luna 1, E-50018 Zaragoza, Spain (all authors)

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2010

Abstract

We present a new vision-based control approach that autonomously drives a nonholonomic vehicle to a target location. The vision system is a camera fixed on the vehicle, and the target location is defined by an image previously taken at that location. The control scheme is based on the trifocal tensor model, which is computed from feature correspondences in the calibrated retina across three views: the initial, current and target images. The contribution is a trifocal-based control law defined by an exact input-output linearization of the trifocal tensor model. The desired evolution of the system towards the target is defined directly in terms of the trifocal tensor elements by means of sinusoidal functions, without requiring metric or additional information from the environment. The trifocal tensor offers important advantages for visual control: it is more robust than two-view geometry because it incorporates the information of a third view, and, unlike epipolar geometry, it does not degenerate when the baseline is short. Simulations show the performance of the approach, which has been tested with image noise and calibration errors.
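As a companion to the abstract, the sketch below illustrates one standard way the trifocal tensor can be estimated linearly from point correspondences across three views (the classical SVD-based linear algorithm, not necessarily the exact estimation procedure used in the paper), together with a hypothetical half-cosine reference profile of the kind that could define a smooth sinusoidal evolution of a tensor element towards zero. Function names, the minimum-correspondence assumption, and the specific reference shape are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x such that skew(v) @ w == cross(v, w)."""
    return np.array([[0.0,  -v[2],  v[1]],
                     [v[2],  0.0,  -v[0]],
                     [-v[1], v[0],  0.0]])

def estimate_trifocal_tensor(x1, x2, x3):
    """Linear estimation of a trifocal tensor T (3x3x3), up to scale, from
    point correspondences across three views.

    x1, x2, x3 : (N, 3) arrays of homogeneous points (N >= 7), assumed to be
    expressed in the calibrated retina (normalized camera coordinates).

    Uses the point-point-point incidence relation
        [x2]_x (sum_i x1[i] * T[i]) [x3]_x = 0,
    which is linear in the 27 tensor entries; the null vector of the stacked
    constraint matrix is recovered by SVD.
    """
    rows = []
    for p1, p2, p3 in zip(x1, x2, x3):
        S2, S3 = skew(p2), skew(p3)
        # Each correspondence contributes 9 equations (4 of them independent).
        for r in range(3):
            for s in range(3):
                a = np.zeros(27)
                for i in range(3):
                    for j in range(3):
                        for k in range(3):
                            # Coefficient of T[i, j, k] in equation (r, s).
                            a[9 * i + 3 * j + k] = p1[i] * S2[r, j] * S3[k, s]
                rows.append(a)
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    T = Vt[-1].reshape(3, 3, 3)          # entries T[i, j, k], defined up to scale
    return T / np.linalg.norm(T)

def sinusoidal_reference(value0, t, t_final):
    """Hypothetical smooth reference: drives a quantity from value0 to zero
    over [0, t_final] with a half-cosine profile (illustrative only)."""
    if t >= t_final:
        return 0.0
    return 0.5 * value0 * (1.0 + np.cos(np.pi * t / t_final))
```

In practice, the plain linear solve benefits from coordinate normalization and from enforcing the internal constraints of a geometrically valid tensor; at least seven point triplets are needed, since the 27 entries have 26 degrees of freedom up to scale and each triplet supplies four independent equations.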