Perceiving affordances: A computational investigation of grasping affordances

  • Authors:
  • Roberto Prevete; Giovanni Tessitore; Ezio Catanzariti; Guglielmo Tamburrini

  • Affiliations:
  • Department of Physical Sciences, University of Naples Federico II, Naples, Italy

  • Venue:
  • Cognitive Systems Research
  • Year:
  • 2011


Abstract

The Grasping Affordance Model (GAM) introduced here provides a computational account of the perceptual processes that enable one to identify grasping action possibilities in visual scenes. GAM identifies the core of affordance perception with visuo-motor transformations that associate features of visually presented objects with a collection of hand grasping configurations. This account is consistent with neuroscientific models of the relevant visuo-motor functions and their localization in the monkey brain. GAM differs from other computational models of biological grasping affordances in its modeling focus, functional account, and tested abilities. Notably, by learning to associate object features with hand shapes, GAM generalizes its grasp identification abilities to a variety of previously unseen objects. Even though GAM's information processing involves neither semantic memory access nor full-fledged object recognition, perceptions of (grasping) affordances are mediated by substantive computational mechanisms, including the learning of object parts, selective analysis of visual scenes, and guessing from experience.
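
The central association the abstract describes, mapping visually extracted object features to candidate hand grasping configurations, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the data are synthetic and hypothetical, the feature and joint dimensionalities are assumed, and ordinary ridge regression stands in for GAM's learned visuo-motor transformation. It only shows how a mapping learned from seen objects can generalize to previously unseen ones.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): learn a mapping from object
# feature vectors to hand grasp configurations, then apply it to unseen
# objects. Ridge regression is a stand-in for GAM's visuo-motor transformation.

rng = np.random.default_rng(0)

n_train, n_test = 200, 20
d_obj, d_hand = 12, 8   # assumed object-feature and hand-joint dimensions

# Synthetic ground-truth linear visuo-motor map plus observation noise.
W_true = rng.normal(size=(d_obj, d_hand))
X_train = rng.normal(size=(n_train, d_obj))   # features of "seen" objects
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(n_train, d_hand))

# Ridge regression: W = (X^T X + lam * I)^(-1) X^T Y
lam = 1e-2
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d_obj),
                    X_train.T @ Y_train)

# Generalization: predict grasp configurations for previously unseen objects.
X_test = rng.normal(size=(n_test, d_obj))
Y_pred = X_test @ W
Y_true = X_test @ W_true
print("mean abs error on unseen objects:", np.mean(np.abs(Y_pred - Y_true)))
```

Under these assumptions the learned map recovers grasp configurations for novel feature vectors with small error, which is the generalization property the abstract attributes to GAM; the actual model learns the association from visual data rather than from a synthetic linear map.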