Position based visual servoing for catching a 3-D flying object using RLS trajectory estimation from a monocular image sequence

  • Authors:
  • R. Herrejon;S. Kagami;K. Hashimoto

  • Affiliations:
  • Intelligent Control Systems Laboratory, Department of System Information Sciences, Tohoku University, Sendai, Japan;Intelligent Control Systems Laboratory, Department of System Information Sciences, Tohoku University, Sendai, Japan;Intelligent Control Systems Laboratory, Department of System Information Sciences, Tohoku University, Sendai, Japan

  • Venue:
  • ROBIO'09: Proceedings of the 2009 International Conference on Robotics and Biomimetics
  • Year:
  • 2009


Abstract

Online coordination of visual information with slow-speed manipulator control is studied in the specific task of three-dimensional robotic catching using position-based visual servoing. The problem involves the design and application of a recursive algorithm to extract and predict the position of an object in a 3-D environment from a single feature correspondence in a monocular image sequence. The target translational model assumes an object moving along a parabolic path governed by projectile physics. A state-space model incorporating the kinematic states is constructed, and recursive techniques are used to estimate the state vector as a function of time. The measured data are the noisy image-plane coordinates of the object match taken from each image in the sequence. Varying image-plane noise levels are allowed for and investigated. The target trajectory estimation is formulated as a tracking problem, which can use an arbitrarily large number of images in a sequence and is carried out using Recursive Least Squares (RLS). Results are demonstrated by both simulations and experiments using a real-time vision system and a six-degree-of-freedom robotic arm with speed capabilities of up to 1.0 m/s.
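
The sketch below illustrates the general idea of recursively fitting a parabolic (projectile) trajectory with RLS, as described in the abstract. It is a minimal simplification, not the authors' method: it assumes noisy 3-D position samples with a known gravity term subtracted, rather than the paper's monocular image-plane measurements, and the sample rate, noise level, and initial conditions are invented for illustration.

```python
import numpy as np

class RLS:
    """Recursive Least Squares for a linear-in-parameters model y = phi^T theta + noise."""
    def __init__(self, n_params, delta=1e3, forgetting=1.0):
        self.theta = np.zeros(n_params)        # parameter estimate
        self.P = np.eye(n_params) * delta      # large initial covariance = uninformative prior
        self.lam = forgetting                  # forgetting factor (1.0 = standard RLS)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)  # Kalman-like gain
        self.theta += gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Hypothetical example: estimate initial position and velocity of a ballistic
# target from noisy position samples, treating gravity as known.
g = 9.81
true_p0, true_v0 = np.array([0.0, 0.0, 1.0]), np.array([1.5, 0.5, 3.0])

estimators = [RLS(n_params=2) for _ in range(3)]   # one per axis; params = [p0, v0]
rng = np.random.default_rng(0)
for k in range(30):
    t = k * 0.01                                   # assumed 100 Hz image sequence
    pos = true_p0 + true_v0 * t
    pos[2] -= 0.5 * g * t**2                       # parabolic vertical motion
    meas = pos + rng.normal(scale=0.005, size=3)   # noisy measurement
    meas[2] += 0.5 * g * t**2                      # remove known gravity term -> linear model
    for axis, est in enumerate(estimators):
        est.update(phi=[1.0, t], y=meas[axis])

p0_hat = np.array([e.theta[0] for e in estimators])
v0_hat = np.array([e.theta[1] for e in estimators])
# The estimated parabola can then be extrapolated to predict an interception point
# for the manipulator, in the spirit of the catching task described above.
```

Because the regressor [1, t] makes the model linear in the unknowns once gravity is compensated, each new frame refines the estimate at constant cost, which is what makes an arbitrarily long image sequence tractable in real time.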