Knowledge representation and inference for grasp affordances

  • Authors:
  • Karthik Mahesh Varadarajan; Markus Vincze

  • Affiliations:
  • Automation and Control Institute, TU Vienna, Austria; Automation and Control Institute, TU Vienna, Austria

  • Venue:
  • ICVS'11: Proceedings of the 8th International Conference on Computer Vision Systems
  • Year:
  • 2011


Abstract

Knowledge bases for semantic scene understanding and processing form indispensable components of holistic intelligent computer vision and robotic systems. In particular, task-based grasping requires perception modules that are tied to knowledge representation systems in order to provide optimal solutions. However, most state-of-the-art systems for robotic grasping, such as K-CoPMan, which uses semantic information in mapping and planning for grasping, depend on explicit 3D model representations, which restricts scalability. Moreover, these systems lack the conceptual knowledge that could aid the perception module in identifying, through implicit cognitive processing, the best objects in the field of view for task-based manipulation. This limits the scalability, extensibility, usability and versatility of such systems. In this paper, we use the concept of functional and geometric part affordances to build a holistic knowledge representation and inference framework that aids task-based grasping. The performance of the system is evaluated on complex scenes and indirect queries.
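
To make the central idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation, and all class and function names are illustrative assumptions) of a knowledge base that links geometric object parts to functional affordances and answers an indirect, task-based query such as "something to pour from", without relying on explicit 3D models.

```python
# Hypothetical illustration of part-based affordance lookup for task-based grasping.
from dataclasses import dataclass, field


@dataclass
class Part:
    name: str                                      # geometric part label, e.g. "handle"
    affordances: set = field(default_factory=set)  # functional affordances of this part


@dataclass
class ObjectConcept:
    name: str
    parts: list = field(default_factory=list)

    def affords(self, task: str) -> bool:
        # An object concept affords a task if any of its parts carries that affordance.
        return any(task in p.affordances for p in self.parts)


# Toy conceptual knowledge base: object classes described by parts, not 3D models.
knowledge_base = [
    ObjectConcept("mug", [Part("handle", {"wrap-grasp", "pour"}),
                          Part("cavity", {"contain"})]),
    ObjectConcept("plate", [Part("flat-surface", {"support"}),
                            Part("rim", {"pinch-grasp"})]),
]


def query(task: str):
    """Return object concepts whose part affordances satisfy the requested task."""
    return [obj.name for obj in knowledge_base if obj.affords(task)]


if __name__ == "__main__":
    print(query("pour"))         # -> ['mug']
    print(query("pinch-grasp"))  # -> ['plate']
```

In an actual system such inference would be grounded in perceived part hypotheses from the scene, but the sketch shows how an indirect query can be resolved through part affordances rather than explicit object models.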