Function-based reasoning for goal-oriented image segmentation

  • Authors: Melanie A. Sutton; Louise Stark
  • Affiliations: University of West Florida, Pensacola, Florida; University of the Pacific, Stockton, California
  • Venue: Proceedings of the 2006 international conference on Towards affordance-based robot control
  • Year: 2006


Abstract

Function-based object recognition provides a framework for representing and reasoning about object functionality as a means to recognize novel objects and produce plans for interacting with the world. When function can be perceived visually, function-based computer vision is consistent with Gibson's theory of affordances. Objects are recognized by their functional attributes. These attributes can be segmented out of the scene and given symbolic labels, which can then be used to guide the search space for additional functional attributes. An example of such affordance-driven scene segmentation is the process of attaching symbolic labels to the areas that afford sitting (functional seats) and using these areas to guide parameter selection for deriving nearby surfaces that potentially afford back support. The Generic Recognition Using Form and Function (GRUFF) object recognition system reasons about and generates plans for understanding 3-D scenes of objects by performing such a functional attribute-based labeling process. The avenue explored here is a novel approach that autonomously directs image acquisition and range segmentation by determining the extent to which surfaces in the scene meet specified functional requirements, or provide affordances associated with a generic category of objects.
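To make the two-stage labeling process concrete, the following is an illustrative sketch, not the GRUFF implementation: a hypothetical `Surface` record and hand-picked thresholds (seat height range, normal orientation, search radius) stand in for whatever geometric tests and parameters the actual system uses. Surfaces that afford sitting are labeled first, and those labels then restrict the search for nearby back-support candidates.

```python
from dataclasses import dataclass

@dataclass
class Surface:
    """Planar scene surface: centroid (x, y, z) with z up, unit normal, area in m^2."""
    center: tuple
    normal: tuple
    area: float

def affords_sitting(s, min_area=0.12, height=(0.3, 0.6)):
    # Roughly horizontal, at sittable height, and large enough to sit on.
    # Thresholds are illustrative assumptions, not GRUFF's actual parameters.
    horizontal = s.normal[2] > 0.9
    at_height = height[0] <= s.center[2] <= height[1]
    return horizontal and at_height and s.area >= min_area

def affords_back_support(s, seat, max_dist=0.5):
    # Roughly vertical surface, close to a labeled seat and above it.
    vertical = abs(s.normal[2]) < 0.3
    dx = s.center[0] - seat.center[0]
    dy = s.center[1] - seat.center[1]
    near = (dx * dx + dy * dy) ** 0.5 <= max_dist
    return vertical and near and s.center[2] > seat.center[2]

def label_scene(surfaces):
    """First label functional seats, then use them to guide the search
    for back-support candidates, as in affordance-driven segmentation."""
    labels = {}
    seats = [s for s in surfaces if affords_sitting(s)]
    for s in seats:
        labels[id(s)] = "functional_seat"
    for s in surfaces:
        if id(s) in labels:
            continue
        if any(affords_back_support(s, seat) for seat in seats):
            labels[id(s)] = "back_support"
    return labels
```

In this sketch the seat labels play the role the abstract describes: they narrow where (and with what parameters) the system looks for back-support surfaces, rather than testing every surface against every functional requirement independently.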