We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower-variance gradient estimates than those obtained by regular policy gradient methods. We show that for several complex control tasks, including robust standing with a humanoid robot, this method outperforms well-known algorithms from the fields of standard policy gradients, finite-difference methods, and population-based heuristics. We also show that the improvement is largest when the parameter samples are drawn symmetrically. Lastly, we analyse the importance of the individual components of our method by incrementally incorporating them into the other algorithms and measuring the gain in performance after each step.
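To make the parameter-space sampling and symmetric-sampling ideas concrete, here is a minimal sketch, assuming a Gaussian search distribution over policy parameters with per-dimension mean `mu` and standard deviation `sigma`. The toy `episode_return` objective, the step sizes, and the moving-average reward baseline are illustrative assumptions standing in for real rollouts and tuning, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_return(theta):
    # Toy stand-in for a policy rollout: return peaks when theta hits a target.
    target = np.array([1.0, -0.5, 2.0])
    return -np.sum((theta - target) ** 2)

# Gaussian search distribution over policy parameters (hypothetical sizes/rates).
mu = np.zeros(3)
sigma = np.ones(3)
alpha_mu, alpha_sigma = 0.1, 0.05
baseline = episode_return(mu)  # moving-average reward baseline

for step in range(200):
    # Symmetric sampling: evaluate mu + eps and mu - eps with the same noise eps.
    eps = rng.normal(0.0, sigma)
    r_plus = episode_return(mu + eps)
    r_minus = episode_return(mu - eps)

    r_diff = 0.5 * (r_plus - r_minus)   # drives the mean update
    r_avg = 0.5 * (r_plus + r_minus)    # drives the exploration-width update

    # Likelihood-gradient-style updates on the sampling distribution itself.
    mu += alpha_mu * r_diff * eps
    sigma += alpha_sigma * (r_avg - baseline) * (eps**2 - sigma**2) / sigma
    sigma = np.maximum(sigma, 1e-3)     # keep exploration noise positive

    baseline = 0.9 * baseline + 0.1 * r_avg  # track recent average reward

print("learned mu:", mu)  # should approach the target after training
```

Because the same perturbation is evaluated in both directions, the difference term `r_diff` cancels much of the return noise that a single one-sided sample would carry, which is one intuition for why symmetric samples reduce gradient-estimate variance.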