Knowledge bases for semantic scene understanding and processing are indispensable components of holistic intelligent computer vision and robotic systems. In particular, task-based grasping requires perception modules that are tightly coupled with knowledge representation systems in order to provide optimal solutions. However, most state-of-the-art systems for robotic grasping, such as K-CoPMan, which uses semantic information in mapping and planning for grasping, depend on explicit 3D model representations, which restricts scalability. Moreover, these systems lack the conceptual knowledge that could aid the perception module in identifying, through implicit cognitive processing, the objects in the field of view best suited for task-based manipulation. This limits the scalability, extensibility, usability, and versatility of such systems. In this paper, we use the concept of functional and geometric part affordances to build a holistic knowledge representation and inference framework that aids task-based grasping. The performance of the system is evaluated on complex scenes and indirect queries.
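To make the idea of part-affordance-based inference concrete, the following is a minimal illustrative sketch, not the paper's actual system: a toy knowledge base in which each object is described by its parts, and each part carries functional and geometric affordance labels. The objects, parts, and affordance labels here are hypothetical examples; the actual framework couples such a knowledge base with a perception module.

```python
# Toy affordance knowledge base (illustrative only; all entries hypothetical).
# object -> part -> {functional affordances, geometric affordances}
KB = {
    "mug":   {"handle": {"functional": {"grasp"}, "geometric": {"cylindrical"}},
              "body":   {"functional": {"contain", "pour"}, "geometric": {"concave"}}},
    "knife": {"handle": {"functional": {"grasp"}, "geometric": {"elongated"}},
              "blade":  {"functional": {"cut"}, "geometric": {"flat"}}},
    "plate": {"surface": {"functional": {"support"}, "geometric": {"flat"}}},
}

def objects_affording(task):
    """Answer an indirect query such as 'something to cut with' by
    returning objects having a part whose functional affordance matches."""
    return sorted(
        obj for obj, parts in KB.items()
        if any(task in attrs["functional"] for attrs in parts.values())
    )

def graspable_part(obj):
    """Return a part of the object suited for grasping, if any."""
    for part, attrs in KB[obj].items():
        if "grasp" in attrs["functional"]:
            return part
    return None

print(objects_affording("cut"))   # -> ['knife']
print(graspable_part("knife"))    # -> 'handle'
```

The sketch shows how an indirect query ("cut") is resolved to a suitable object without an explicit 3D model, and how part-level affordances then select the part to grasp.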