We propose a graph-based algorithm for apprenticeship learning when the reward features are noisy. Previous apprenticeship learning techniques learn a reward function using only local state features, which can be a limitation in practice, as features are often misspecified or subject to measurement noise. Our graphical framework, inspired by work on Markov Random Fields, alleviates this problem by propagating information between states and rewarding policies that choose similar actions in adjacent states. We demonstrate the advantage of the proposed approach on grid-world navigation problems and on teaching a simulated robot to grasp novel objects.
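To make the idea of "rewarding policies that choose similar actions in adjacent states" concrete, here is a minimal, hypothetical sketch of such a policy score on a 4-connected grid world. It combines a linear reward on (possibly noisy) local state features with an MRF-style pairwise term that rewards action agreement between neighboring states. All names, the grid structure, and the scoring function are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def policy_score(policy, features, weights, lam=0.5):
    """Score a deterministic grid-world policy (illustrative sketch).

    policy:   (H, W) int array giving the action chosen in each state
    features: (H, W, K) array of local, possibly noisy state features
    weights:  (K,) learned linear reward weights
    lam:      strength of the pairwise smoothness term (assumed hyperparameter)
    """
    # Unary term: linear reward on local state features, summed over states.
    unary = float(np.tensordot(features, weights, axes=([2], [0])).sum())
    # Pairwise term: count action agreements between 4-connected neighbors,
    # propagating information between adjacent states.
    agree = (np.sum(policy[:, :-1] == policy[:, 1:]) +
             np.sum(policy[:-1, :] == policy[1:, :]))
    return unary + lam * float(agree)

# Example: on a 3x3 grid, a uniform policy maximizes the pairwise term
# (6 horizontal + 6 vertical neighbor pairs all agree).
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 3, 2))
w = np.array([1.0, -0.5])
uniform = np.zeros((3, 3), dtype=int)
print(policy_score(uniform, feats, w))
```

In a full method, one would search for the policy maximizing such a score under the MDP dynamics; this sketch only illustrates how a pairwise smoothness term can counteract noise in individual state features.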