Self-organization and associative memory: 3rd edition
Introduction to Reinforcement Learning
RoboCup 2000: Robot Soccer World Cup IV
A Two-Stage Relational Reinforcement Learning with Continuous Actions for Real Service Robots. MICAI '09 Proceedings of the 8th Mexican International Conference on Artificial Intelligence
Reinforcement learning using Voronoi space division. Artificial Life and Robotics
Exploring continuous action spaces with diffusion trees for reinforcement learning. ICANN'10 Proceedings of the 20th International Conference on Artificial Neural Networks - Part II
ICCOMP'06 Proceedings of the 10th WSEAS International Conference on Computers
A dynamic route change mechanism for mobile ad hoc networks. International Journal of Communication Networks and Distributed Systems
Applying neural network to reinforcement learning in continuous spaces. ISNN'05 Proceedings of the Second International Conference on Advances in Neural Networks - Volume Part I
Task-Driven Discretization of the Joint Space of Visual Percepts and Continuous Actions. ECML'06 Proceedings of the 17th European Conference on Machine Learning
Q-learning can be used to learn a control policy that maximises a scalar reward through interaction with the environment. Q-learning is commonly applied to problems with discrete states and actions. We describe a method suitable for control tasks that require continuous actions in response to continuous states. The system consists of a neural network coupled with a novel interpolator. Simulation results are presented for a non-holonomic control task. Advantage Learning, a variation of Q-learning, is shown to enhance learning speed and reliability for this task.
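The abstract builds on the standard Q-learning update, Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)]. As a point of reference, here is a minimal tabular sketch of that update on a toy chain MDP of my own devising; it is not the paper's neural-network-plus-interpolator system, and all names and parameters (the chain layout, `alpha`, `gamma`, `epsilon`) are illustrative assumptions.

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=2000,
               alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a toy chain MDP: action 1 moves right,
    action 0 moves left, and only reaching the rightmost state pays
    reward 1. Illustrative example, not the paper's method."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states - 1)  # random non-goal start state
        for _ in range(20):              # step limit per episode
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda u: Q[s][u])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # core update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if r == 1.0:                 # goal reached: end episode
                break
    return Q
```

Note the `max` over actions in the update, which is what restricts plain Q-learning to discrete action sets; handling continuous actions, as the abstract describes, requires some form of function approximation and interpolation over the action space. Advantage Learning, the variant the abstract credits with faster and more reliable learning, instead learns values scaled relative to the best action in each state.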