We present an approach to incrementally teach human gestures to a humanoid robot. By using active teaching methods that put the human teacher "in the loop" of the robot's learning, we show that the essential characteristics of a gesture can be transferred efficiently by interacting socially with the robot. In a first phase, the robot observes the user demonstrating the skill while wearing motion sensors. The motions of the user's two arms and head are recorded by the robot, projected into a latent space of motion, and encoded probabilistically in a Gaussian Mixture Model (GMM). In a second phase, the user helps the robot refine its gesture through kinesthetic teaching, i.e., by grabbing and moving its arms throughout the movement to provide the appropriate scaffolds. To update the model of the gesture, we compare the performance of two incremental training procedures against a batch training procedure. We present experiments showing that different modalities can be combined efficiently to incrementally teach basketball officials' signals to a HOAP-3 humanoid robot.
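The encoding pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: PCA stands in for the latent-space projection, scikit-learn's `GaussianMixture` encodes (time, latent position) jointly, and a small Gaussian Mixture Regression helper reproduces the gesture by conditioning on time. The demonstration data, dimensionalities, and component count are all hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Hypothetical demonstrations: 5 repetitions of a 2-second gesture,
# each a T x D array of joint angles (D = 9 as a stand-in for arms + head).
rng = np.random.default_rng(0)
T, D, n_demos = 100, 9, 5
t = np.linspace(0, 2, T)
demos = [np.sin(np.outer(t, np.arange(1, D + 1)))
         + 0.05 * rng.standard_normal((T, D)) for _ in range(n_demos)]

# Project joint angles into a low-dimensional latent space of motion.
X = np.vstack(demos)
pca = PCA(n_components=3)
latent = pca.fit_transform(X)

# Encode (time, latent position) jointly in a GMM.
data = np.column_stack([np.tile(t, n_demos), latent])
gmm = GaussianMixture(n_components=6, covariance_type="full",
                      random_state=0).fit(data)

def gmr(gmm, t_query):
    """Gaussian Mixture Regression: expected latent position given time."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    out = np.zeros((len(t_query), means.shape[1] - 1))
    for i, tq in enumerate(t_query):
        # Responsibility of each component for this time value
        # (the 1/sqrt(2*pi) factor cancels in the normalization).
        h = np.array([wk * np.exp(-0.5 * (tq - m[0]) ** 2 / c[0, 0])
                      / np.sqrt(c[0, 0])
                      for wk, m, c in zip(w, means, covs)])
        h /= h.sum()
        # Conditional mean of the latent dims given time, per component.
        cond = np.array([m[1:] + c[1:, 0] / c[0, 0] * (tq - m[0])
                         for m, c in zip(means, covs)])
        out[i] = np.sum(h[:, None] * cond, axis=0)
    return out

latent_traj = gmr(gmm, t)                        # reproduce in latent space
joint_traj = pca.inverse_transform(latent_traj)  # back to joint-angle space
print(joint_traj.shape)  # (100, 9)
```

An incremental update, as compared against batch retraining in the paper, would refit or adapt the GMM as each new kinesthetic correction arrives instead of re-estimating from all stored demonstrations.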