Autonomous helicopter flight is widely regarded as a highly challenging control problem. Helicopters are highly unstable and exhibit complicated dynamical behavior, making it particularly difficult to design controllers that achieve high performance across a broad flight regime. Yet, despite this difficulty, expert human pilots can demonstrate a wide variety of maneuvers, including aerobatic maneuvers at the edge of the helicopter's performance envelope. In this paper, we present modeling and control algorithms that leverage such demonstrations to build high-performance control systems for autonomous helicopters. More specifically, we detail our experience with the Stanford Autonomous Helicopter, which is now capable of extreme aerobatic flight that meets or exceeds the performance of our own expert pilot.