Creating Brain-Like Intelligence
A prerequisite for achieving brain-like intelligence is the ability to rapidly learn new behaviors and actions. A fundamental mechanism for rapid learning in humans is imitation: children routinely learn new skills (e.g., opening a door or tying a shoelace) by imitating their parents; adults continue to learn by imitating skilled instructors (e.g., in tennis). In this chapter, we propose a probabilistic framework for imitation learning in robots that is inspired by how humans learn from imitation and exploration. Rather than relying on complex (and often brittle) physics-based models, the robot learns a dynamic Bayesian network that captures its dynamics directly in terms of sensor measurements and actions during an imitation-guided exploration phase. After learning, actions are selected based on probabilistic inference in the learned Bayesian network. We present results demonstrating that a 25-degree-of-freedom humanoid robot can learn dynamically stable, full-body imitative motions simply by observing a human demonstrator.
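The pipeline described in the abstract, learn a probabilistic transition model from imitation-guided exploration, then select actions by inference in that model, can be sketched in miniature. The following is an illustrative toy (not the chapter's implementation): it fits a linear-Gaussian transition model p(s' | s, a) from exploration data by least squares (the maximum-likelihood fit for Gaussian noise), then picks the candidate action whose predicted next state is most likely to match a demonstrated target state. The dynamics matrices, state dimensions, and action grid are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" dynamics, used only to generate exploration data.
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.5], [1.0]])

def step(s, a):
    """One noisy transition of the (unknown to the learner) plant."""
    return A_true @ s + B_true @ a + 0.01 * rng.standard_normal(2)

# Imitation-guided exploration phase: collect (s, a, s') transitions.
S, A, S_next = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1, 1, size=1)
    s2 = step(s, a)
    S.append(s); A.append(a); S_next.append(s2)
    s = s2

X = np.hstack([np.array(S), np.array(A)])   # regressors [s, a]
Y = np.array(S_next)                        # targets s'

# Maximum-likelihood fit of the Gaussian transition model s' = A s + B a + eps.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat, B_hat = W[:2].T, W[2:].T

def select_action(s, target, candidates):
    """Pick the action whose predicted next state has highest likelihood
    (equivalently, smallest squared error under isotropic Gaussian noise)
    of matching the demonstrator's observed state."""
    preds = [A_hat @ s + B_hat @ a for a in candidates]
    errs = [np.sum((p - target) ** 2) for p in preds]
    return candidates[int(np.argmin(errs))]

candidates = [np.array([a]) for a in np.linspace(-1, 1, 21)]
best = select_action(np.array([1.0, 0.0]), np.array([1.0, 0.5]), candidates)
```

In the chapter's setting the model is a full dynamic Bayesian network over sensor measurements and actions, and action selection is posed as probabilistic inference rather than the grid search over a scalar action used here; the sketch only conveys the learn-then-infer structure.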