The reinforcement learning algorithm TD(λ), applied to Markov decision processes, is known to need added exploration in many cases. With the usual implementations of exploration in TD learning, the feedback signals are either distorted or discarded, so the exploration hurts the algorithm's learning. This article gives a modification of the TD learning algorithm that allows exploration at no cost to the accuracy or speed of learning. The idea is that when the learning agent performs an action it perceives as inferior, it is compensated for its loss: it receives an additional reward equal to the estimated cost of making the exploring move. The modification is compatible with existing exploration strategies and works well on a simple grid-world problem, even when the agent explores completely at random.
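
The compensation idea can be made concrete with a short sketch. The code below is an illustrative assumption, not the article's implementation: it uses one-step state-value TD(0) instead of the full TD(λ), a deterministic 5×5 grid world with a -1 step cost, and a one-step lookahead so the agent can estimate the cost of each candidate move. All names and parameters here (SIZE, alpha, gamma, episode count) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 5
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Clip-to-grid transition: -1 per move, 0 for the move that reaches the goal."""
    nxt = (min(max(state[0] + action[0], 0), SIZE - 1),
           min(max(state[1] + action[1], 0), SIZE - 1))
    return nxt, 0.0 if nxt == GOAL else -1.0

V = np.zeros((SIZE, SIZE))
alpha, gamma = 0.2, 0.95

for episode in range(2000):
    state = (0, 0)
    for _ in range(200):  # cap episode length
        if state == GOAL:
            break
        # One-step lookahead backup for every candidate move.
        backups = [r + gamma * V[nxt]
                   for nxt, r in (step(state, a) for a in ACTIONS)]
        best = max(backups)

        # Explore completely at random, as in the article's grid-world test.
        a = int(rng.integers(len(ACTIONS)))
        nxt, reward = step(state, ACTIONS[a])

        # Compensation: the estimated cost of the exploring move, i.e. how
        # much worse the chosen action's backup looks than the greedy one's.
        compensation = best - backups[a]

        # Adding the compensation to the reward makes the TD target equal
        # the greedy backup, so random exploration no longer distorts V.
        V[state] += alpha * (reward + compensation + gamma * V[nxt] - V[state])
        state = nxt
```

Because the compensation term equals the gap between the greedy backup and the chosen action's backup, the TD target in this sketch collapses to the greedy backup itself; the values learned by a completely random explorer then coincide with those of the greedy policy, which is the sense in which exploration comes at no cost.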