Learning Visual Features to Recommend Grasp Configurations

  • Authors:
  • J. H. Piater

  • Affiliations:
  • -

  • Venue:
  • -
  • Year:
  • 2000


Abstract

This paper is a preliminary account of current work on a visual system that learns to aid in robotic grasping and manipulation tasks. Localized features of the visual scene are learned that correlate reliably with the orientation of a dextrous robotic hand during haptically guided grasps. On the basis of these features, hand configurations are recommended for future grasping operations. The learning process is instance-based, on-line, and incremental, and the interaction between the visual and haptic systems is loosely anthropomorphic. It is conjectured that critical spatial information can be learned on the basis of features of visual appearance, without explicit geometric representations or planning.
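The abstract describes an instance-based, on-line, incremental learner that associates visual features with hand orientations observed during haptically guided grasps. The following is a hypothetical sketch of that idea, not the paper's actual system: it stores (feature vector, orientation) pairs as they arrive and recommends the orientation of the nearest stored feature. All names, the feature representation, and the nearest-neighbour lookup are illustrative assumptions.

```python
import math

class GraspOrientationMemory:
    """Hypothetical sketch of an instance-based, on-line learner:
    stores (visual feature vector, hand orientation) experiences and
    recommends the orientation paired with the nearest stored feature."""

    def __init__(self):
        # Each stored instance is one past haptically guided grasp.
        self.instances = []  # list of (feature_vector, orientation_deg)

    def observe(self, feature, orientation_deg):
        # Incremental update: simply memorize the new experience.
        self.instances.append((list(feature), orientation_deg))

    def recommend(self, feature):
        # Nearest-neighbour lookup by Euclidean distance in feature space.
        if not self.instances:
            return None
        nearest = min(self.instances, key=lambda s: math.dist(s[0], feature))
        return nearest[1]

# Usage: learn from two guided grasps, then recommend for a new view.
mem = GraspOrientationMemory()
mem.observe([0.1, 0.9], 30.0)   # feature seen with a 30-degree grasp
mem.observe([0.8, 0.2], 120.0)  # feature seen with a 120-degree grasp
print(mem.recommend([0.75, 0.25]))  # → 120.0
```

Because the learner only appends instances and defers all computation to query time, it is naturally on-line and incremental, matching the learning regime the abstract describes.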