Robotic eye-to-hand coordination: Implementing visual perception to object manipulation

  • Authors:
  • Shahram Jafari; Ray Jarvis

  • Affiliations:
  • Both authors: Intelligent Robotics Research Centre (IRRC), Monash University, VIC 3800, Australia (corresponding author: Shahram Jafari, e-mail: shj53@yahoo.com)

  • Venue:
  • International Journal of Hybrid Intelligent Systems - Recent developments in Hybrid Intelligent Systems
  • Year:
  • 2005

Abstract

This paper integrates several novel intelligent techniques for scene analysis, hand-eye coordination and object manipulation, realized in a concrete working robot named COERSU. Firstly, a robust tuner based on genetic algorithms (GA) is presented to optimize the early visual processing. Then, several architectures of adaptive neuro-fuzzy inference system (ANFIS), multi-layer perceptron (MLP) and K-nearest neighbor (KNN) classifiers are compared on scene analysis and object recognition. Next, new methods for eye-to-hand visual servoing based on neuro-fuzzy approaches are detailed and compared with relative visual servoing, another new method developed by the authors. The theoretical model, mathematical framework and convergence criteria for these visual servoing techniques are also provided. The experiments show that the hybrid intelligent methods match relative visual servoing in accuracy while outperforming it in speed. Snapshots of experimental results from COERSU in a table-top scenario, manipulating soft objects (e.g. fruit and eggs), are provided to validate the methods.
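To make the eye-to-hand servoing idea concrete (a fixed camera observes both the gripper and the target, and the image-space error drives the arm), the Python sketch below shows a classical proportional image-based control step. This is a minimal illustration under assumed names, gains and a hand-picked image Jacobian; it is not the authors' COERSU implementation or their neuro-fuzzy controller.

  import numpy as np

  def visual_servo_step(gripper_px, target_px, image_jacobian, gain=0.5):
      """One eye-to-hand control step: map the pixel-space error between
      the observed gripper and target to a Cartesian velocity command
      via the pseudo-inverse of the image Jacobian (v = -gain * J+ * e)."""
      error = gripper_px - target_px                  # image-space error e(t)
      return -gain * np.linalg.pinv(image_jacobian) @ error

  # Hypothetical example: a 2-DOF planar arm watched by a fixed camera.
  J = np.array([[120.0, 0.0],                         # assumed pixels per metre
                [0.0, 115.0]])
  v = visual_servo_step(np.array([310.0, 240.0]),     # gripper centroid (px)
                        np.array([300.0, 250.0]),     # target centroid (px)
                        J)
  print(v)  # Cartesian velocity command; e(t) shrinks as the loop iterates

In the paper's hybrid variants, a neuro-fuzzy or MLP model presumably learns this error-to-motion mapping rather than using a fixed analytic Jacobian as above, which is consistent with the speed advantage the abstract reports.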