This paper presents a novel object-object affordance learning approach that enables intelligent robots to learn the interactive functionalities of objects from human demonstrations in everyday environments. Instead of considering a single object in isolation, we model the interactive motions between paired objects in a human-object-object relationship. The innate interaction-affordance knowledge of the paired objects is learned from a labeled training dataset containing the relative motions of the paired objects, human actions, and object labels. The learned knowledge is represented as a Bayesian network, which can be used both to improve the recognition reliability of objects and human actions and to generate an appropriate manipulation motion for a robot once a pair of objects is recognized. This paper also presents an image-based visual servoing approach that uses the learned motion features of the interaction affordance as control goals to drive a robot through manipulation tasks.
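The core inference idea can be sketched as a small discrete Bayesian network: a latent interaction variable explains the two object labels and an observed relative-motion feature, so evidence about any one variable sharpens belief about the others. The network structure, variable names, and probabilities below are hypothetical placeholders for illustration, not the paper's actual model:

```python
# Hypothetical CPTs for a toy human-object-object network.
# Latent variable: interaction; observed: object1, object2, motion.
P_I = {"pour": 0.5, "stir": 0.5}                       # P(interaction)
P_O1 = {"pour": {"kettle": 0.9, "spoon": 0.1},         # P(object1 | interaction)
        "stir": {"kettle": 0.2, "spoon": 0.8}}
P_O2 = {"pour": {"cup": 0.9, "bowl": 0.1},             # P(object2 | interaction)
        "stir": {"cup": 0.3, "bowl": 0.7}}
P_M = {"pour": {"tilt": 0.8, "circle": 0.2},           # P(motion | interaction)
       "stir": {"tilt": 0.1, "circle": 0.9}}

def posterior_interaction(o1, o2, motion):
    """P(interaction | o1, o2, motion) by exact enumeration."""
    joint = {i: P_I[i] * P_O1[i][o1] * P_O2[i][o2] * P_M[i][motion]
             for i in P_I}
    z = sum(joint.values())
    return {i: p / z for i, p in joint.items()}

# Observing a kettle, a cup, and a tilting relative motion makes
# "pour" overwhelmingly likely, illustrating how the paired-object
# evidence disambiguates the action.
post = posterior_interaction("kettle", "cup", "tilt")
```

In the same spirit, a robot that has recognized the object pair could read the most probable interaction's motion features back out of the network as servoing goals.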